HCM Data Loader object ClassroomResource can be used to bulk upload classroom resources in Oracle Learning Cloud. Existing locations created as part of Global HR can also be designated as classroom resources.
Below is a sample file to upload classroom resources using HDL:
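A minimal sketch of a ClassroomResource.dat file is shown below. The attribute names here are indicative only; download the latest ClassroomResource.dat template from View Business Objects before building the actual file:

METADATA|ClassroomResource|ClassroomResourceNumber|Title|Description|Capacity|LocationCode|SourceSystemOwner|SourceSystemId
MERGE|ClassroomResource|CR_TEST_001|Test Classroom 1|Test classroom loaded via HDL|25|TEST_LOC_01|HRC_SQLLOADER|CR_TEST_001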
Goal weightage can be updated using the GoalMeasurement component of the Goal business object. First, we need to extract the uploaded measurements. Use the below query to extract the details:
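A sketch of such an extract query is given below; the table and column names (hrg_goals, hrg_goal_measurements) are indicative of the performance goals schema and should be validated in your environment before use:

select hg.goal_id
,hg.goal_name
,hgm.measurement_id
,hgm.measurement_name
,hgm.target_value
,(select hikm.source_system_id
  from hrc_integration_key_map hikm
  where hikm.surrogate_id = hgm.measurement_id) "SourceSystemId"
from hrg_goals hg
,hrg_goal_measurements hgm
where hg.goal_id = hgm.goal_id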
Configure HCM Data Loader is a task used to define the HCM Data Loader parameters. The parameters are broadly divided into the following categories: Availability, Diagnostic, File Definition, Performance, and Scheduling Defaults. These are the default settings applied to every HCM Data Loader job.
You can override some of these parameters at the individual .dat file level using SET commands, while others can be overridden while submitting the Import and Load job.
Please note that the default values differ between HDL and HSDL (HCM Spreadsheet Data Loader).
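For example, a SET command placed at the top of a .dat file overrides the default for that file only; PURGE_FUTURE_CHANGES is shown below purely as an illustration:

COMMENT Override the default behaviour for this file only
SET PURGE_FUTURE_CHANGES Y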
You can use the below query to get these details from the backend using a BIP report:
select PARAM_CATEGORY
,PARAM_NAME
,DEFAULT_VALUE
,HSDL_DEFAULT_VALUE
,HDL_ENABLED
,HDL_OVERRIDE_ENABLED
,HSDL_ENABLED
,HSDL_OVERRIDE_ENABLED
,VALUE_LOOKUP_TYPE
,CREATED_BY
,CREATION_DATE
,LAST_UPDATED_BY
,LAST_UPDATE_DATE
from hrc_dl_all_parameters
ORDER By 1,2
Query to get list of overridden values:
select hdap.PARAM_CATEGORY
,hdap.PARAM_NAME
,hdap.DEFAULT_VALUE
,hdap.HSDL_DEFAULT_VALUE
,hdap.HDL_ENABLED
,hdap.HDL_OVERRIDE_ENABLED
,hdap.HSDL_ENABLED
,hdap.HSDL_OVERRIDE_ENABLED
,hdap.VALUE_LOOKUP_TYPE
,hdap.CREATED_BY
,hdap.CREATION_DATE
,hdap.LAST_UPDATED_BY
,hdap.LAST_UPDATE_DATE
,hdpo.OVERRIDE_LEVEL "OVERRIDDEN_AT_LEVEL"
,hdpo.OVERRIDE_VALUE "OVERRIDDEN_VALUE"
from hrc_dl_all_parameters hdap
,hrc_dl_parameter_overrides hdpo
where hdap.parameter_id = hdpo.parameter_id
ORDER By 1,2
Sample query to extract element entry values for a given element, input value, and assignment:
Select DISTINCT peevf.element_entry_value_id
,peef.element_entry_id
,petf.base_element_name
,peevf.effective_start_date
,paam.assignment_number
,pivf.base_name
from per_all_assignments_m paam
,pay_element_types_f petf
,pay_element_entries_f peef
,pay_element_entry_values_f peevf
,pay_input_values_f pivf
where 1=1
and paam.person_id = peef.person_id
and peef.element_type_id = petf.element_type_id
and pivf.element_type_id = petf.element_type_id
and peef.element_entry_id = peevf.element_entry_id
and paam.ASSIGNMENT_TYPE in ('E')
and paam.primary_assignment_flag = 'Y'
and petf.base_element_name = 'Dental Plan'
and pivf.base_name = 'Amount'
and paam.assignment_number = 'E1111'
and trunc(sysdate) between petf.effective_start_date and petf.effective_end_date
and trunc(sysdate) between paam.effective_start_date and paam.effective_end_date
and trunc(sysdate) between pivf.effective_start_date and pivf.effective_end_date
Oracle HCM uses the extended lookups feature to support dependent lookup values. For example, while creating a visa or work permit record for a person in Singapore, the Category field depends on the type of pass chosen. The values of the Category field are derived from the extended lookup.
Navigate to Setup and Maintenance -> Manage Extended Lookup Codes -> Visa Permit Type.
Now, let us take an example where we need to load two category values based on the lookup code S Pass.
Prepare the HDL file in the below format:
METADATA|ExtendedLookupCode|ExtendedLookupCodeId|LookupType|LookupCode|LegislationCode|ExtendedLookupCode|ExtendedLookupCodeName|SourceSystemOwner|SourceSystemId
MERGE|ExtendedLookupCode||PER_VISA_PERMIT_TYPE|SG_SP|SG|TEST_SP1|S Pass Holder – Test 1|HRC_SQLLOADER|TEST_1
MERGE|ExtendedLookupCode||PER_VISA_PERMIT_TYPE|SG_SP|SG|TEST_SP2|S Pass Holder – Test 2|HRC_SQLLOADER|TEST_2
Zip the file and upload it using HCM Data Loader from Data Exchange.
On successful load, the new values can be verified from either of the following two places on the UI:
There are various cases where more than one attachment should be maintained against an employee document record. First, we will create a new document type that allows multiple attachments.
1. Navigate to Setup and Maintenance -> Document Types -> Click on Create.
2. Choose multiple occurrences as Yes:
3. Enter the relevant information and submit the record.
4. Prepare the HDL in below format and load the data:
METADATA|DocumentsOfRecord|PersonNumber|DateFrom|DateTo|DocumentType|DocumentName|DocumentNumber|DocumentCode|Country|AttachmentEntityName|Comments|IssuedDate|IssuingAuthority|IssuingCountry|IssuingLocation|Publish|PublishDate|SourceSystemId|SourceSystemOwner
MERGE|DocumentsOfRecord|101|2021/01/01||Test Multi Attachments Doc|Multiple Attachments||TMAD_001||||||||||HRC_SQLLOADER_101_TMAD_001|HRC_SQLLOADER
METADATA|DocumentAttachment|SourceSystemOwner|SourceSystemId|PersonNumber|DocumentType|DocumentCode|Title|Description|DataTypeCode|Country|EntityName|File|FileName|MimeType|URL|URLorTextorFileName|DmVersonNumber
MERGE|DocumentAttachment|HRC_SQLLOADER|HRC_SQLLOADER_101_TMAD_001_1|101|Test Multi Attachments Doc|TMAD_001|Test Multi Attachments Doc|Test Multi Attachments Doc 1|FILE|||Historical_rating.docx|Historical_rating.docx|||Historical_rating.docx|
MERGE|DocumentAttachment|HRC_SQLLOADER|HRC_SQLLOADER_101_TMAD_001_2|101|Test Multi Attachments Doc|TMAD_001|Test Multi Attachments Doc|Test Multi Attachments Doc 2|FILE|||Historical_rating1.docx|Historical_rating1.docx|||Historical_rating1.docx|
5. Place the .docx files in the BlobFiles folder and zip them along with the .dat file:
In Oracle Learning Cloud, the learning administrator can define the default access at course or offering level. Course.dat can be used to load this information.
In order to extract this information from the database in HDL format, use the below query:
SELECT 'MERGE' "METADATA"
,'CourseDefaultAccess' "CourseDefaultAccess"
,to_char(wapf.effective_start_date,'RRRR/MM/DD') "EffectiveStartDate"
,to_char(wapf.effective_end_date,'RRRR/MM/DD') "EffectiveEndDate"
,wapf.access_permission_number "DefaultAccessNumber"
,wlif.learning_item_number "CourseNumber"
,wapf.follow_spec "FollowSpecialization"
,wapf.assign_as_manager "MgrInitialStatus"
,wapf.assign_for_self "SelfInitialStatus"
,wlif.ss_view_mode "SelfViewMode"
FROM WLF_ACCESS_PERMISSIONS_F wapf
,WLF_LEARNING_ITEMS_F wlif
WHERE 1=1
AND wapf.access_permission_id= wlif.access_permission_id
--AND trunc(sysdate) between wapf.effective_start_date and wapf.effective_end_date
AND trunc(sysdate) between wlif.effective_start_date and wlif.effective_end_date
AND wlif.learning_item_number = 'OLC251051'
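The column aliases in the above query map one-to-one to the HDL attributes, so the corresponding METADATA line for Course.dat would look like the below (verify against the latest refreshed template):

METADATA|CourseDefaultAccess|EffectiveStartDate|EffectiveEndDate|DefaultAccessNumber|CourseNumber|FollowSpecialization|MgrInitialStatus|SelfInitialStatus|SelfViewMode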
Learners can also be assigned using access groups.
The below SQL can be used to extract this information:
SELECT wlif.learning_item_id
,wlif.learning_item_number course_Number
,wlif_tl.name course_Name
,wlif_tl.description_short description
,wlif_ag.learning_item_number access_group_number
,wlif_tl_ag.name access_group_name
FROM wlf_learning_items_f wlif
,wlf_learning_items_f_tl wlif_tl
,wlf_learning_items_f wlif_ag
,wlf_learning_items_f_tl wlif_tl_ag
,wlf_li_ag_relations_f wlarf
WHERE 1=1
AND wlif_tl.learning_item_id = wlif.learning_item_id
AND wlif_tl.name = 'Test 1234'
AND wlif_ag.learning_item_type = 'ORA_ACCESS_GROUP'
AND wlif_tl_ag.learning_item_id = wlif_ag.learning_item_id
AND wlarf.access_learning_item_id = wlif_ag.learning_item_id
AND wlarf.catalog_learning_item_id = wlif.learning_item_id
AND TRUNC(SYSDATE) BETWEEN wlif_tl_ag.effective_start_date AND wlif_tl_ag.effective_end_date
AND TRUNC(SYSDATE) BETWEEN wlif_ag.effective_start_date AND wlif_ag.effective_end_date
AND TRUNC(SYSDATE) BETWEEN wlif_tl.effective_start_date AND wlif_tl.effective_end_date
AND TRUNC(SYSDATE) BETWEEN wlif.effective_start_date AND wlif.effective_end_date
Query to extract assignment grade step details in HDL format (the new grade step name is to be supplied):
SELECT 'MERGE' "METADATA"
,'AssignmentGradeSteps' "AssignmentGradeSteps"
,pagsf.EFFECTIVE_START_DATE "EffectiveStartDate"
,pagsf.EFFECTIVE_END_DATE "EffectiveEndDate"
,pagsf.ASSIGN_GRADE_STEP_ID "AssignGradeStepId"
,paam.assignment_number "AssignmentNumber"
,pav.ACTION_CODE "ActionCode"
,parv.ACTION_REASON_CODE "ReasonCode"
,pgsfv.name "GradeStepName"
,NULL "NewGradeStepName" --> to be supplied
FROM PER_ASSIGN_GRADE_STEPS_F pagsf,
PER_GRADE_STEPS_F_VL pgsfv,
PER_ALL_ASSIGNMENTS_M paam,
PER_ACTION_OCCURRENCES pao,
PER_ACTIONS_VL pav,
PER_ACTION_REASONS_VL parv
WHERE pagsf.GRADE_STEP_ID = pgsfv.GRADE_STEP_ID
AND TRUNC(SYSDATE) BETWEEN pagsf.effective_start_date AND pagsf.effective_end_date
AND TRUNC(SYSDATE) BETWEEN pgsfv.effective_start_date AND pgsfv.effective_end_date
AND TRUNC(SYSDATE) BETWEEN paam.effective_start_date AND paam.effective_end_date
AND pagsf.assignment_id = paam.assignment_id
AND paam.assignment_type = 'E'
AND paam.effective_latest_change = 'Y'
AND pagsf.ACTION_OCCURRENCE_ID = pao.ACTION_OCCURRENCE_ID
AND pav.action_id = pao.action_id
AND parv.action_reason_id = pao.action_reason_id
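Built from the column aliases in the above query, the corresponding METADATA line would look like the below (verify against the latest refreshed template before loading):

METADATA|AssignmentGradeSteps|EffectiveStartDate|EffectiveEndDate|AssignGradeStepId|AssignmentNumber|ActionCode|ReasonCode|GradeStepName|NewGradeStepName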
The Manage Document Type Security Profile task can be used to define a security profile with a list of document types tagged to it. The document types can either be included in or excluded from the security profile.
HDL can be used to mass create and update document type security profiles.
Check the below post on how to refresh the business objects to download the latest template for DocumentTypeSecurityProfile.dat:
Goal Library goals are loaded as content items using ContentItem.dat. The various attributes and their mapping can be found in GoalLibraryAttributes.xlsx, available on My Oracle Support under the below note IDs:
Query To Get Goal Library Goals (Doc ID 2228128.1)
How to Load Library Goals (Doc ID 2201757.1)
Please note that ItemText2 (Priority Code) and ItemText12 (External ID) are required attributes. Although they are not marked as mandatory and the HDL load will complete successfully with NULL values for them, the loaded goals will not be visible in the UI.
ItemText12 can be any random numeric value.
Sample HDL File:
METADATA|ContentItem|ContentItemId|ContentTypeId|ContentItemCode|Name|ItemDescrlong|ItemTextTl11|DateFrom|DateTo|ItemText1|ItemText2|ItemDate1|ItemDate2|ItemDate3|ItemText3|ItemTextTl1|ItemText5|ItemTextTl2|ItemNumber1|ItemText6|ItemText7|RatingModelId|ItemText9|ItemText10|ItemText12|SourceSystemId|SourceSystemOwner
MERGE|ContentItem||124||Test Goal Libarary 123|Goal Loaded into libarary using content item HDL|Schedule regular headcount and budget review meetings. Facilitate regular reporting distributions to senior management on a biweekly basis|2000/01/01||A|HIGH||||STRETCH||QUANTITATIVE||20|MAX|PERCENT||PERFORMANCE|ALL|1000090000|VIS09090909|VIS
With enhanced talent profiles (v2), SectionId is a required attribute when loading profile items to a worker's profile. The section ID can be found using the below query:
select SECTION_ID
from HRT_PROFILE_TYP_SECTIONS
WHERE CONTENT_TYPE_ID=110
AND SECTION_CONTEXT LIKE 'PERSON%'
For other content types, the sections can be queried from HRT_PROFILE_SECTIONS_B, for example:
select *
from HRT_PROFILE_SECTIONS_b
where content_type_id=125
order by creation_date desc
Technical consultants encounter many scenarios where one or two attributes on the assignment data need to be updated after the initial data load. This can happen during any phase of a new implementation or post go-live. For example, there may be a need to update the assignment category for a set of workers. In such scenarios, the approach is to extract the already uploaded assignment data with the minimum required attributes, append the assignment category attribute, and upload it back to Fusion using HDL.
The key here is to extract the minimum required data for Assignment and WorkTerms. The mandatory attributes required for an update using HDL can be checked from:
My Client Groups -> Data Exchange -> View Business Objects -> Worker -> WorkTerms -> Assignment
Validated version: 20D
Sample Query to extract the minimum required attributes for Assignments and Workterms in order to update assignment category (AssignmentCategory):
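A sketch of such an extract is given below, generating pipe-delimited rows in the same style as the other queries in this post. The attribute list and the literal values (the action code 'ASG_CHANGE' and the new category 'FR' are placeholders) must be validated against the refreshed Worker template; WorkTerms rows (assignment_type 'ET') need the same keys and must precede the Assignment rows in the file:

SELECT 'METADATA|Assignment|AssignmentId|EffectiveStartDate|EffectiveEndDate|EffectiveSequence|EffectiveLatestChange|AssignmentNumber|ActionCode|AssignmentCategory|SourceSystemId|SourceSystemOwner' D_ROW
FROM DUAL
UNION ALL
SELECT 'MERGE|Assignment|'||
paam.assignment_id||'|'||
to_char(paam.effective_start_date,'RRRR/MM/DD')||'|'||
to_char(paam.effective_end_date,'RRRR/MM/DD')||'|'||
paam.effective_sequence||'|'||
paam.effective_latest_change||'|'||
paam.assignment_number||'|'||
'ASG_CHANGE'||'|'|| --> placeholder action code; use the action relevant to your update
'FR'||'|'|| --> placeholder for the new assignment category value
hikm.source_system_id||'|'||
hikm.source_system_owner
from per_all_assignments_m paam
,hrc_integration_key_map hikm
where paam.assignment_id = hikm.surrogate_id
and paam.assignment_type = 'E'
and trunc(sysdate) between paam.effective_start_date and paam.effective_end_date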
Sometimes, while loading worker assignments or assigning an employee to a new position, you get the error "The selected position has no open headcount. Select a different position to continue". One resolution is to disable the position incumbent validation at the enterprise level.
Follow the below steps to disable the position validation:
1. Navigate to Setup and Maintenance -> Search -> Manage Enterprise HCM Information:
2. Search for Position Incumbent Validation on the Enterprise screen. If it is not found, follow steps 3-8; otherwise, move directly to step 9.
3. If Position Incumbent Validation is not available on enterprise screen, search for ‘Manage Extensible Flexfields’ and search for ‘PER_ORGANIZATION_INFORMATION_EFF’:
4. Click on edit icon:
5. Expand the Category section, choose Enterprise, and scroll down:
6. Click on Pages under ‘Enterprise Details’:
7. Click on the add icon under 'Enterprise Details: Associated Context Details' and select and add the 'Position Incumbent Validation' context:
8. A new context will be added:
Click on Save and Close, then deploy the flexfield.
9. On successful deployment of flexfield, go back to ‘Manage Enterprise HCM Information’ task and search for ‘Position Incumbent Validation’:
10. Click on Edit button in Correction mode and check the checkbox and save the record:
Below is the sample HDL to load GL costing information at department level using the Organization.dat file. It is always advisable to use the latest METADATA attributes; check the below link to see the process of refreshing the HDL objects:
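The sample below is only a sketch of the shape of such a file. The OrgInformation attribute list, the EFF context code, and the costing segment columns vary by configuration, so the placeholder values in angle brackets are hypothetical and the actual METADATA lines must be taken from a refreshed Organization.dat template:

METADATA|OrgInformation|SourceSystemOwner|SourceSystemId|EffectiveStartDate|EffectiveEndDate|OrganizationId(SourceSystemId)|FLEX:PER_ORGANIZATION_INFORMATION_EFF|EFF_CATEGORY_CODE|<costing segment columns from the refreshed template>
MERGE|OrgInformation|HRC_SQLLOADER|DEPT_COST_001|2021/01/01|4712/12/31|DEPT_001|<context code>|<category code>|<segment values>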
Refresh Business objects is a process which is used to create the HDL template for a business object with the latest supported attributes.
When to run ‘Refresh Business Objects’ process:
It is advised to run this process:
After every upgrade
After any new DFF/EFF attributes are configured for any HDL supported business objects.
How to run 'Refresh Business Objects' process:
To run ‘Refresh Business Objects’ process, navigate to My Client Groups -> Data Exchange -> View Business Objects
On the View Business Objects UI, you will see 2 buttons:
a. Refresh Object – This is used when you want to refresh only one object at a time. This is usually run when new DFF attributes are set up for a particular object. Once you click on 'Refresh Object', you will see the below screen:
Click on Submit button to submit the process.
You can then click on ‘View Process Results’ button to check the progress:
Once the process completes, you can see the last refreshed date:
b. Refresh All Objects – This is used when you want to refresh all the business objects. This is usually run post release upgrade.
Post P2T refresh, it is mandatory to mask the actual email IDs of the users in the cloned test environment. This is required to avoid unwanted emails being triggered to the end users.
Use the below SQL to extract the data in HDL format:
select 'MERGE' "METADATA",
'PersonEmail' "PersonEmail",
pea.email_address_id "EmailAddressId",
pea.person_id "PersonId",
to_char(pea.date_from,'RRRR/MM/DD') "DateFrom",
to_char(pea.date_to,'RRRR/MM/DD') "DateTo",
pea.email_type "EmailType",
-->pea.email_address "1EmailAddress",
replace(replace(pea.email_address,'@','@invalid'),'.co','.xco') "EmailAddress", --> Change the format as per your requirement
(select email_hrc.source_system_id
from hrc_integration_key_map email_hrc
WHERE pea.email_Address_id = email_hrc.surrogate_id) "SourceSystemId",
(select email_hrc.source_system_owner
from hrc_integration_key_map email_hrc
WHERE pea.email_Address_id = email_hrc.surrogate_id) "SourceSystemOwner"
from per_email_addresses pea
where 1=1
and upper(email_address) like '%XXCOMPANY%' --> Change the format as per your requirement
Query with Person Number:
select 'MERGE' "METADATA",
'PersonEmail' "PersonEmail",
pea.email_address_id "EmailAddressId",
pea.person_id "PersonId",
papf.person_number "PersonNumber",
to_char(pea.date_from,'RRRR/MM/DD') "DateFrom",
to_char(pea.date_to,'RRRR/MM/DD') "DateTo",
pea.email_type "EmailType",
-->pea.email_address "1EmailAddress",
replace(replace(pea.email_address,'@','@invalid'),'.co','.xco') "EmailAddress", --> Change the format as per your requirement
(select email_hrc.source_system_id
from hrc_integration_key_map email_hrc
WHERE pea.email_Address_id = email_hrc.surrogate_id) "SourceSystemId",
(select email_hrc.source_system_owner
from hrc_integration_key_map email_hrc
WHERE pea.email_Address_id = email_hrc.surrogate_id) "SourceSystemOwner"
from per_email_addresses pea, per_all_people_f papf
where 1=1
and pea.person_id = papf.person_id
and trunc(sysdate) between papf.effective_start_date and papf.effective_end_date
and upper(email_address) like '%XXCOMPANY%' --> Change the format as per your requirement
Once the data is generated, create the HDL file and load the Worker.dat.
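The rows generated by the above queries sit under a METADATA line built from the column aliases (drop PersonNumber if you used the first query); verify it against the refreshed Worker.dat template:

METADATA|PersonEmail|EmailAddressId|PersonId|PersonNumber|DateFrom|DateTo|EmailType|EmailAddress|SourceSystemId|SourceSystemOwner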
Post data load steps:
1. Run the "Send Personal Data for Multiple Users to LDAP" process. This process will sync the latest email address from the worker email to the user account.
2. Log in to the Security Console and verify a couple of users from the UI, or verify from the backend.
3. Disable the notifications (optional) – Navigate to Security Console -> User Categories -> Notification Preferences -> Uncheck Enable Notifications:
4. Alternatively, you can redirect all notifications to a single email address. Search for the 'Manage Approval Groups' task in 'Setup and Maintenance':
Clicking on above task will take you to the BPM Worklist page.
Click on the "Administration" button, search for Test Notification Email Address, and provide the value to which you want to redirect all notifications:
If you want to completely purge the email data, use the below post:
Sometimes it is required to delete uploaded phone data from Fusion HCM. The below query can be used to create a BIP report, which in turn can be used to generate the HDL file to delete phone data.
The query can be modified to delete phone data for contacts as well as for other person types.
Supported version: 20D
Query:
SELECT 'METADATA|PersonPhone|PhoneId|PhoneType|PersonId|DateFrom|SourceSystemId|SourceSystemOwner' D_ROW
FROM DUAL
union
SELECT 'DELETE|PersonPhone|'||
pp.phone_id||'|'||
pp.phone_type||'|'||
pp.person_id||'|'||
to_char(pp.date_from,'RRRR/MM/DD')||'|'||
hikm.source_system_id||'|'||
hikm.source_system_owner
from per_phones pp,
hrc_integration_key_map hikm
where pp.phone_id= hikm.surrogate_id
and exists (select 1 from per_periods_of_service where person_id=pp.person_id)