The below query extracts grade rate values in HDL format:
SELECT DATA
FROM (
SELECT 'METADATA|GradeRateValue|SourceSystemId|SourceSystemOwner|EffectiveStartDate|EffectiveEndDate|RateId|GradeCode|SetCode|LegislativeDataGroup|MinimumAmount|MaximumAmount|MidValueAmount|ValueAmount' DATA, 1 DATA_ROW
FROM DUAL
UNION ALL
SELECT 'MERGE|GradeRateValue|'
||
hikm.source_system_id
||'|'||
hikm.source_system_owner
||'|'||
TO_CHAR(prvf.effective_start_date,'YYYY/MM/DD')
||'|'||
TO_CHAR(prvf.effective_end_date,'YYYY/MM/DD')
||'|'||
prvf.rate_id
||'|'||
pgf.grade_code
||'|'||
fssv.set_code
||'|'||
ldg.name
||'|'||
prvf.minimum
||'|'||
prvf.maximum
||'|'||
prvf.mid_value
||'|'||
prvf.value DATA, 2 DATA_ROW
FROM per_rate_values_f prvf,
per_legislative_data_groups_tl ldg,
per_grades_f pgf,
fnd_setid_sets_vl fssv,
per_rates_f pr,
hrc_integration_key_map hikm
where 1=1
and pr.legislative_data_group_id = ldg.legislative_data_group_id
and ldg.language = USERENV('LANG')
and trunc(sysdate) between prvf.effective_start_date and prvf.effective_end_date
and upper(prvf.rate_object_type) = 'GRADE'
and prvf.rate_object_id = pgf.grade_id
and trunc(sysdate) between pgf.effective_start_date and pgf.effective_end_date
AND pgf.set_id=fssv.set_id
and prvf.rate_id = pr.rate_id
and trunc(sysdate) between pr.effective_start_date and pr.effective_end_date
and hikm.surrogate_id = prvf.rate_value_id
) ORDER BY DATA_ROW
Sample HDL:
METADATA|GradeRateValue|SourceSystemId|SourceSystemOwner|EffectiveStartDate|EffectiveEndDate|RateId|GradeCode|SetCode|LegislativeDataGroup|MinimumAmount|MaximumAmount|MidValueAmount|ValueAmount
MERGE|GradeRateValue|300000107518119|FUSION|1951/01/01|4712/12/31|300000106295381|ADMIN05|COMMON|KZ Legislative Data Group|870|962|913.5|
The below query extracts user accounts along with their source keys in HDL format:
SELECT 'METADATA|User|PersonNumber|Username|SourceSystemId|SourceSystemOwner' Headerrow, 1 dataorder
FROM DUAL
UNION ALL
SELECT 'MERGE|User'
||'|'||papf.person_number
||'|'||pea.email_address
||'|'||hikm.source_system_id
||'|'||hikm.source_system_owner Headerrow, 2 dataorder
FROM per_all_people_f papf
,per_email_addresses pea
,per_users pu
,hrc_integration_key_map hikm
WHERE papf.person_id = pea.person_id
and pu.person_id = pea.person_id
and pu.user_id = hikm.surrogate_id
and trunc(sysdate) between papf.effective_start_date and papf.effective_end_date
order by dataorder
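Sample output (values are illustrative):
METADATA|User|PersonNumber|Username|SourceSystemId|SourceSystemOwner
MERGE|User|10090|john.smith@xyz.com|300000123456789|FUSION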
The below query generates PersonEmail records in HDL format, stamping a placeholder work email for each worker:
select 'METADATA|PersonEmail|EmailAddressId|PersonId|DateFrom|EmailType|PrimaryFlag|EmailAddress|SourceSystemId|SourceSystemOwner'
from dual
UNION ALL
select 'MERGE|PersonEmail' ||'|'||
NULL ||'|'||
papf.person_id ||'|'||
to_char(ppos.date_start,'YYYY/MM/DD') ||'|'||
'W1' ||'|'||
'Y' ||'|'||
'sendmail-discard_'||papf.person_number||'@xyz.com' ||'|'||
'PER_EMAIL_'||papf.person_number ||'|'||
'HRC_SQLLOADER'
from per_all_people_f papf
,per_periods_of_service ppos
where 1=1
and trunc(sysdate) between papf.effective_start_date and papf.effective_end_date
and papf.person_id = ppos.person_id
and ppos.date_start = (select MAX(ppos2.date_start) from per_periods_of_service ppos2
where ppos2.person_id = ppos.person_id)
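The query produces rows like the following (person values are illustrative):
METADATA|PersonEmail|EmailAddressId|PersonId|DateFrom|EmailType|PrimaryFlag|EmailAddress|SourceSystemId|SourceSystemOwner
MERGE|PersonEmail||300000047606365|2002/10/07|W1|Y|sendmail-discard_10090@xyz.com|PER_EMAIL_10090|HRC_SQLLOADER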
The below query fetches the source keys for a worker's periods of service; the SOURCE_SYSTEM_ID values returned here are what you supply in the PeriodOfServiceId(SourceSystemId) fields of the WorkRelationship and WorkTerms HDL:
select hikm.source_system_owner, hikm.source_system_id, ppos.person_id
from per_periods_of_service ppos
,hrc_integration_key_map hikm
where ppos.period_of_service_id = hikm.surrogate_id
and ppos.person_id in (select person_id from per_all_people_f
where person_number = '1894')
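Illustrative output (values are made up):
SOURCE_SYSTEM_OWNER|SOURCE_SYSTEM_ID|PERSON_ID
FUSION|300000123456789|300000047606365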
The below query extracts the minimum required attributes for an absence entry update. In this example, the absence status is updated to Withdrawn; you can adjust the values as per your needs:
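A minimal sketch of such a query, assuming absence entries live in ANC_PER_ABS_ENTRIES and that the AbsenceEntry object exposes PersonAbsenceEntryId and AbsenceStatus attributes (the ORA_WITHDRAWN code is also an assumption; verify all three against the business-object definition in your pod):
SELECT data
FROM (
SELECT 'METADATA|PersonAbsenceEntry|PersonAbsenceEntryId|AbsenceStatus|SourceSystemId|SourceSystemOwner' data, 1 DATA_SEQ
FROM DUAL
UNION ALL
SELECT 'MERGE|PersonAbsenceEntry|'
|| apae.per_absence_entry_id
||'|'||
'ORA_WITHDRAWN'
||'|'||
hikm.source_system_id
||'|'||
hikm.source_system_owner data, 2 DATA_SEQ
from anc_per_abs_entries apae,
hrc_integration_key_map hikm
where apae.per_absence_entry_id = hikm.surrogate_id
)
ORDER BY DATA_SEQ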
Below is a sample file (tested on an older release) to load rehire worker records; the structure should remain the same. A few attributes may be deprecated now, so you may get an error. Remove those attributes and the file should load fine.
METADATA|Worker|PersonId|EffectiveStartDate|EffectiveEndDate|PersonNumber|StartDate|DateOfBirth|CountryOfBirth|TownOfBirth|CategoryCode|ActionCode|ReasonCode|GUID|SourceSystemOwner|SourceSystemId
MERGE|Worker||2002/10/07|4712/12/31|10090|2002/10/07|1976/01/02||||HIRE|||HRC_SQLLOADER|1190_PERSON
METADATA|PersonName|PersonNameId|EffectiveStartDate|EffectiveEndDate|PersonId(SourceSystemId)|PersonNumber|LegislationCode|NameType|FirstName|MiddleNames|LastName|Honors|KnownAs|PreNameAdjunct|Suffix|MilitaryRank|PreviousLastName|Title|SourceSystemOwner|SourceSystemId
MERGE|PersonName||2002/10/07|4712/12/31|1190_PERSON||US|GLOBAL|Steven|Flord|Smith|||||||MR.|HRC_SQLLOADER|1190_PERSON_NAME
METADATA|WorkRelationship|PersonNumber|LegalEntityId|DateStart|PersonId(SourceSystemId)|PrimaryFlag|LegalEmployerName|ActionCode|WorkerType|ProjectedTerminationDate|ActualTerminationDate|SourceSystemOwner|SourceSystemId
MERGE|WorkRelationship|10090||2002/10/07|1190_PERSON|Y|ABC Limited|HIRE|E||2008/12/07|HRC_SQLLOADER|283_WORK_RELATIONSHIP
MERGE|WorkRelationship|10090||2015/02/02|1190_PERSON|Y|ABC Limited|REHIRE|E|||HRC_SQLLOADER|101062_WORK_RELATIONSHIP
METADATA|WorkTerms|AssignmentId|AssignmentNumber|EffectiveStartDate|EffectiveEndDate|EffectiveSequence|EffectiveLatestChange|PeriodOfServiceId(SourceSystemId)|PersonId(SourceSystemId)|PersonNumber|LegalEmployerName|DateStart|AssignmentName|AssignmentStatusTypeCode|AssignmentType|BusinessUnitShortCode|PositionOverrideFlag|PrimaryWorkTermsFlag|ActionCode|WorkerType|PersonTypeCode|SystemPersonType|SourceSystemOwner|SourceSystemId
MERGE|WorkTerms||ET10090|2002/10/07|2008/12/07|1|Y|283_WORK_RELATIONSHIP|1190_PERSON||ABC Limited|2002/10/07||ACTIVE_PROCESS|ET|ABC Business Unit|||HIRE|E|Employee|EMP|HRC_SQLLOADER|283_WORK_TERM
MERGE|WorkTerms||ET10090|2008/12/08|4712/12/31|1|Y|283_WORK_RELATIONSHIP|1190_PERSON||ABC Limited|2002/10/07||INACTIVE_PROCESS|ET|ABC Business Unit|||TERMINATION||||HRC_SQLLOADER|283_WORK_TERM
MERGE|WorkTerms||ET10090-2|2015/02/02|2017/06/30|1|Y|101062_WORK_RELATIONSHIP|1190_PERSON||ABC Limited|2015/02/02||ACTIVE_PROCESS|ET|ABC Business Unit|||ASG_CHANGE|E|Employee|EMP|HRC_SQLLOADER|12608_WORK_TERM
MERGE|WorkTerms||ET10090-2|2017/07/01|4712/12/31|1|Y|101062_WORK_RELATIONSHIP|1190_PERSON||ABC Limited|2015/02/02||ACTIVE_PROCESS|ET|ABC Business Unit|||ASG_CHANGE|E|Employee|EMP|HRC_SQLLOADER|12608_WORK_TERM
METADATA|Assignment|ActionCode|EffectiveStartDate|EffectiveEndDate|EffectiveSequence|EffectiveLatestChange|WorkTermsAssignmentId(SourceSystemId)|WorkTermsNumber|AssignmentType|AssignmentName|AssignmentNumber|AssignmentStatusTypeCode|BusinessUnitShortCode|DateProbationEnd|WorkerCategory|AssignmentCategory|PermanentTemporary|FullPartTime|InternalFloor|GradeCode|JobCode|LocationCode|NormalHours|Frequency|DepartmentName|PeriodOfServiceId(SourceSystemId)|PersonId(SourceSystemId)|PositionCode|PrimaryAssignmentFlag|PrimaryFlag|PersonTypeCode|SystemPersonType|EstablishmentId|NoticePeriod|NoticePeriodUOM|ProbationPeriod|ProbationUnit|HourlySalariedCode|LabourUnionMemberFlag|ManagerFlag|EndTime|StartTime|WorkAtHomeFlag|SourceSystemOwner|SourceSystemId
MERGE|Assignment|HIRE|2002/10/07|2008/12/07|1|Y|283_WORK_TERM||E||E10090|ACTIVE_PROCESS|ABC Business Unit||||||||||||ABC Limited|283_WORK_RELATIONSHIP|1190_PERSON||Y|Y|Employee|EMP||||||||||||HRC_SQLLOADER|283_ASSIGNMENT
MERGE|Assignment|TERMINATION|2008/12/08|4712/12/31|1|Y|283_WORK_TERM||E||E10090|INACTIVE_PROCESS|ABC Business Unit|||||||72|128|343|||ABC Maintenance|283_WORK_RELATIONSHIP|1190_PERSON||Y|Y||||||||||||||HRC_SQLLOADER|283_ASSIGNMENT
MERGE|Assignment|REHIRE|2015/02/02|2017/06/30|1|Y|12608_WORK_TERM||E||E10090-2|ACTIVE_PROCESS|ABC Business Unit|||||||67|173|142|||ABC Department|101062_WORK_RELATIONSHIP|1190_PERSON|165061|Y|Y|Employee|EMP|||||||N|N||||HRC_SQLLOADER|12608_ASSIGNMENT
MERGE|Assignment|ASG_CHANGE|2017/07/01|4712/12/31|1|Y|12608_WORK_TERM||E||E10090-2|ACTIVE_PROCESS|ABC Business Unit|||||||67|173|142||ABC Department|101062_WORK_RELATIONSHIP|1190_PERSON|627106|Y|Y|Employee|EMP|||||||N|N||||HRC_SQLLOADER|12608_ASSIGNMENT
METADATA|PersonLegislativeData|PersonLegislativeId|EffectiveStartDate|EffectiveEndDate|PersonId(SourceSystemId)|LegislationCode|HighestEducationLevel|MaritalStatus|MaritalStatusDate|Sex|GUID|SourceSystemOwner|SourceSystemId
MERGE|PersonLegislativeData||2002/10/07|4712/12/31|1190_PERSON|US||M||M||HRC_SQLLOADER|1190_PERSON_LEGISLATIVE
Use either of the below formats of the Organization HDL to update a descriptive flexfield attribute:
Using Surrogate IDs:
METADATA|Organization|OrganizationId|EffectiveStartDate|EffectiveEndDate|FLEX:PER_ORGANIZATION_UNIT_DFF|testAttribute(PER_ORGANIZATION_UNIT_DFF=Global Data Elements)
MERGE|Organization|3000000001230139|1951/01/01|4712/12/31|Global Data Elements|Xyz
METADATA|OrgUnitClassification|OrgUnitClassificationId|OrganizationId|EffectiveStartDate|EffectiveEndDate
MERGE|OrgUnitClassification|3000000001233056|3000000001230139|1951/01/01|4712/12/31
Using User Keys:
METADATA|Organization|Name|ClassificationName|EffectiveStartDate|EffectiveEndDate|FLEX:PER_ORGANIZATION_UNIT_DFF|testAttribute(PER_ORGANIZATION_UNIT_DFF=Global Data Elements)
MERGE|Organization|Test Organization|Department|1951/01/01|4712/12/31|Global Data Elements|Xyz
METADATA|OrgUnitClassification|OrganizationName|ClassificationName|SetCode|EffectiveStartDate|EffectiveEndDate
MERGE|OrgUnitClassification|Test Organization|Department|COMMON|1951/01/01|4712/12/31
HDL supports deletion of element eligibilities through the PayrollElementDefinition business object; its child object ElementEligibility supports create, update, and delete operations.
You can create a simple BIP query to extract the requisite data and create an HDL file out of it:
select pelf.element_link_id
,petf.element_type_id
,petf.base_element_name
,pelf.effective_start_date
from pay_element_types_f petf
, pay_element_links_f pelf
where petf.element_type_id = pelf.element_type_id
and pelf.element_link_id = 300000175215375
and trunc(sysdate) between petf.effective_start_date and petf.effective_end_date
and trunc(sysdate) between pelf.effective_start_date and pelf.effective_end_date
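From this output you can construct the deletion file. A hedged sketch follows; the ElementEligibility attribute names below are assumptions, so download the PayrollElementDefinition template from the View Business Objects task to confirm the exact attributes for your release:
METADATA|ElementEligibility|ElementLinkId|ElementName|EffectiveStartDate
DELETE|ElementEligibility|300000175215375|Test Element|1951/01/01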
Sometimes you may get an error saying that the element eligibility can't be deleted because element entries exist for it. In that case, first identify the corresponding element entries, delete those, and then retry deleting the element eligibility.
Please check MOS note – When Attempting to Delete Element Eligibility Get Error 'The element eligibility record can't be deleted because it would invalidate existing element entries with effective start dates in the future. (PAY-1635756)' (Doc ID 2686914.1) – for a sample query and sample file for element entry deletion.
In rare cases, uploading the HDL file from Data Exchange fails with HRC-1035375 – A connection to the WebCenter Content Server couldn't be established.
The resolution is to raise an SR with Oracle Support and ask them to bounce the OID services.
Many a time there is a requirement to update or change the existing actions and action reason usages. Doing this manually in the UI becomes cumbersome for multiple actions and may lead to human error as well.
Instead, HCM Data Loader can be used to apply the changes using an Actions.dat file.
Use the below SQL queries to extract the Actions and Action Reason Usages data from your pod in HDL format (verify the METADATA attribute lists against the business-object definition for your release):
Actions:
SELECT data
FROM (
SELECT 'METADATA|Actions|ActionCode|ActionName|ActionTypeCode|StartDate|EndDate|SourceSystemId|SourceSystemOwner' data, 1 DATA_SEQ
FROM DUAL
UNION ALL
SELECT 'MERGE|Actions|'
||
actb.action_code
||'|'||
actt.action_name
||'|'||
actb.action_type_code
||'|'||
to_char(actb.start_date, 'yyyy/mm/dd')
||'|'||
to_char(actb.end_date, 'yyyy/mm/dd')
||'|'||
map.source_system_id
||'|'||
map.source_system_owner data, 2 DATA_SEQ
from
PER_ACTIONS_B actb,
PER_ACTIONS_TL actt,
hrc_integration_key_map map
where 1=1
and actb.action_id = actt.action_id
and actt.language = USERENV('LANG')
and actb.action_id = map.surrogate_id
)
ORDER BY DATA_SEQ
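Sample output (values are illustrative):
METADATA|Actions|ActionCode|ActionName|ActionTypeCode|StartDate|EndDate|SourceSystemId|SourceSystemOwner
MERGE|Actions|XX_PROMOTION|Promotion|PROMOTION|1951/01/01|4712/12/31|300000111222333|FUSION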
Action Reason Usage:
SELECT data
FROM (
SELECT 'METADATA|ActionReasonUsage|ActionCode|ActionReasonCode|StartDate|EndDate|SourceSystemId|SourceSystemOwner' data, 1 DATA_SEQ
FROM DUAL
UNION ALL
SELECT 'MERGE|ActionReasonUsage|'
||
aru.action_code
||'|'||
aru.action_reason_code
||'|'||
to_char(aru.start_date, 'yyyy/mm/dd')
||'|'||
to_char(aru.end_date, 'yyyy/mm/dd')
||'|'||
km.source_system_id
||'|'||
km.source_system_owner data, 2 DATA_SEQ
from
hrc_integration_key_map km,
PER_ACTION_REASON_USAGES aru
where 1=1
and aru.ACTION_REASON_USAGE_ID = km.surrogate_id
)
ORDER BY DATA_SEQ
Copy and save the output as Actions.dat and make the required changes.
The ClassroomResource object in HCM Data Loader can be used to bulk upload classroom resources in Oracle Learning Cloud. Existing locations created as part of Global HR can also be designated as classroom resources.
Below is a sample file to upload classroom resources using HDL:
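A minimal sketch of the file, assuming attribute names such as Title, Capacity and LocationCode (verify them by downloading the ClassroomResource template from the View Business Objects task):
METADATA|ClassroomResource|ClassroomResourceNumber|Title|Description|Capacity|LocationCode|LocationSetCode|SourceSystemId|SourceSystemOwner
MERGE|ClassroomResource|CR-1001|Building A Training Room 1|Main campus classroom|25|HQ_LOC|COMMON|CR-1001|HRC_SQLLOADER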
Goal weightage can be updated using the GoalMeasurement METADATA in the Goal business object. First, we need to extract the uploaded measurements. Use the below query to extract the details:
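A minimal sketch of such an extraction, assuming the measurements live in HRG_GOAL_MEASUREMENTS and map to the integration key table via MEASUREMENT_ID (verify both against your pod; the goal name filter is illustrative):
select hikm.source_system_owner,
hikm.source_system_id,
hgm.measurement_id,
hgm.goal_id
from hrg_goal_measurements hgm,
hrc_integration_key_map hikm
where hgm.measurement_id = hikm.surrogate_id
and hgm.goal_id in (select hg.goal_id from hrg_goals hg
where hg.goal_name = 'Test Goal')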
Often in Cloud HCM, we encounter situations where we need to update some information on a worker's assignment after the worker has been created, because the information was not available at the time of hire. One such example is employee category. Let's take a hypothetical example: the employee category should be auto-populated based on the worker's job. As there is no direct link between employee category and job, it becomes a pain to manually look up and enter the correct employee category while hiring. So, in this case, the worker is hired with a job but with no value for employee category.
A DFF segment is defined at the job level which stores the corresponding employee category. We then design a solution which will:
Read the worker's job and then the corresponding employee category from the job.
Generate the data for the WorkTerms and Assignment METADATA in HCM Data Loader format.
Use HCM Extract to consume the data and trigger the HDL Import and Load process.
Schedule the HCM Extract to run daily, or as per the requirement.
Once the HCM Extract runs, the employee category is populated automatically.
Steps to design the integration:
Extract the WorkTerms and Assignment data for all workers where the job is populated and the employee category is NULL.
Create a BI Publisher report to organize the data extracted in step 1 in HCM Data Loader format. Copy the Global Reports Data Model (from /Shared Folders/Human Capital Management/Payroll/Data Models/globalReportsDataModel) to a folder under /Shared Folders/Custom/HR. The folder can be named as per your naming standards.
Add a new data set in the globalReportsDataModel and paste your query into the new data set. A sketch of such a query is shown below.
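A minimal sketch of the extraction query, assuming the employee category is held in PER_ALL_ASSIGNMENTS_M.EMPLOYMENT_CATEGORY and the job-level DFF segment in PER_JOBS_F.ATTRIBUTE1 (both are assumptions; adjust to your configuration and build the full WorkTerms/Assignment HDL strings around it):
select paam.assignment_id,
paam.assignment_number,
to_char(paam.effective_start_date,'YYYY/MM/DD') eff_start,
pjf.attribute1 emp_category_from_job
from per_all_assignments_m paam,
per_jobs_f pjf
where paam.job_id = pjf.job_id
and paam.assignment_type = 'E'
and paam.effective_latest_change = 'Y'
and paam.employment_category is null
and trunc(sysdate) between paam.effective_start_date and paam.effective_end_date
and trunc(sysdate) between pjf.effective_start_date and pjf.effective_end_date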
Download the above file and change the extension to xml.
Open the xml file with Notepad or Notepad++ and remove the first two rows (these rows were added only so that the file could be uploaded here).
Navigate to My Client Groups -> Data Exchange -> HCM Extracts -> Extract Definitions:
Click on Import to import the xml file:
Provide an extract name, uncheck the Changes Only checkbox, and click OK:
Once the extract import is complete, click on the pencil icon to edit:
Click on 'Extract Delivery Option' in the navigation tree on the left. On the right side, under 'Extract Delivery Options', click Edit and update the path of the report you created earlier. It should look like /Custom/HR/AssignmentUpdateRPT.xdo
Make sure the default value for the Auto Load parameter is set to 'Y'.
Save the details. Next, click Extract Execution Tree and then click All Formulas:
Once the formulas are compiled, click on the Submit button.
The next step is to refine the extract so that it submits the Import and Load process:
Navigate to My Client Groups -> Data Exchange -> HCM Extracts -> Refine Extracts. Search for the extract and click on edit.
Select and add the 'Initiate HCM Data Loader' process.
Click Go Task for 'Initiate HCM Data Loader', then click Edit for 'Data Loader Archive Action' and add the relevant parameters:
Parameter Basis – Bind to Flow Task; Basis Value – XX Assignment Update Integration, Submit, Payroll Process
Click Edit for 'Data Loader Configurations' and add the relevant parameters:
Parameter Basis – Constant Bind; Basis Value – ImportMaximumErrors=100,LoadMaximumErrors=100,LoadConcurrentThreads=8,LoadGroupSize=100
Task sequence should look as follows:
Go to Review and click on Submit.
Your extract is now ready for submission. You can submit the extract and test it.
Actions and action reasons are a very important part of any Cloud HCM implementation. Oracle provides a large number of actions and action reasons out of the box, but additional actions and action reasons can be created from the UI as well as using HDL if needed.
Each action is tied to an action type. Please note that action types are seeded and can't be created. You can create a custom action and attach existing action reasons to it; the details are stored in the PER_ACTION_REASON_USAGES table.
During implementation, there is a common requirement to delete some unwanted action reason usages that were created initially and are no longer required. Finding each reason and deleting it from the action in the UI is quite a painful task.
This can be achieved easily using HCM Data Loader.
Run the below query in BIP and save the output as Actions.dat. Zip the file and kick off the Import and Load HCM Data Loader process. You can modify the extract criteria as per your requirements:
SELECT data
FROM (
SELECT 'METADATA|ActionReasonUsage|ActionCode|ActionReasonCode|StartDate|EndDate|SourceSystemId|SourceSystemOwner' data, 1 DATA_SEQ
FROM DUAL
UNION ALL
Select 'DELETE|ActionReasonUsage|'
||
paru.action_code
||'|'||
paru.action_reason_code
||'|'||
to_char(paru.start_date, 'yyyy/mm/dd')
||'|'||
to_char(paru.end_date, 'yyyy/mm/dd')
||'|'||
hikm.source_system_id
||'|'||
hikm.source_system_owner data, 2 DATA_SEQ
from
hrc_integration_key_map hikm,
PER_ACTION_REASON_USAGES paru
where 1=1
and paru.ACTION_REASON_USAGE_ID = hikm.surrogate_id
and paru.created_by <> 'SEED_DATA_FROM_APPLICATION'
)
ORDER BY DATA_SEQ
You can make use of HCM Data Loader to upload Succession Plans (Incumbent, Job and Position Types).
Use the below sample file to load Succession Plans:
METADATA|SuccessionPlan|PlanName|PlanType|Status|AccessTypeCode|Description|IncumbentPersonNumber|JobCode|JobSetCode|DepartmentName|DepartmentSetCode|GradeSetCode|GradeCode|PositionCode|BusinessUnitName|SourceSystemId|SourceSystemOwner
MERGE|SuccessionPlan|Test_Person_Succession_Plan|INCUMBENT|ACTIVE|PUBLIC|Sample description for Incumbent Plan Type.|111222|||||||||HRC_SQLLOADER_Test_Person_Succession_Plan_INCUMBENT|HRC_SQLLOADER
MERGE|SuccessionPlan|Test_Job_Succession_Plan|JOB|ACTIVE|PUBLIC|Sample description for Job Plan Type.||XX052|COMMON|Test HR Department|COMMON|COMMON|GRD_90||XX BU|HRC_SQLLOADER_Test_Job_Succession_Plan_JOB|HRC_SQLLOADER
MERGE|SuccessionPlan|Test_Position_Succession_Plan|POSITION|ACTIVE|PUBLIC|Sample description for Position Plan Type.||||Test HR Department|COMMON|||POS_0301|COMMON|HRC_SQLLOADER_Test_Position_Succession_Plan_POSITION|HRC_SQLLOADER
METADATA|SuccessionPlanOwner|PlanName|PlanOwnerPersonNumber|OwnerTypeCode|SourceSystemId|SourceSystemOwner
MERGE|SuccessionPlanOwner|Test_Person_Succession_Plan|99997|ADMINISTRATOR|HRC_SQLLOADER_ADMINISTRATOR_99997_Test_Person_Succession_Plan|HRC_SQLLOADER
MERGE|SuccessionPlanOwner|Test_Job_Succession_Plan|99997|ADMINISTRATOR|HRC_SQLLOADER_ADMINISTRATOR_99997_Test_Job_Succession_Plan|HRC_SQLLOADER
MERGE|SuccessionPlanOwner|Test_Position_Succession_Plan|99997|ADMINISTRATOR|HRC_SQLLOADER_ADMINISTRATOR_99997_Test_Position_Succession_Plan|HRC_SQLLOADER
MERGE|SuccessionPlanOwner|Test_Position_Succession_Plan|99998|ADMINISTRATOR|HRC_SQLLOADER_ADMINISTRATOR_99998_Test_Position_Succession_Plan|HRC_SQLLOADER
Once the data is loaded, you can query the HRM_PLANS table to verify the loaded plans.
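For example (a minimal verification sketch; the standard WHO column CREATION_DATE is assumed to be present, as on most HCM tables):
select *
from hrm_plans
where creation_date > trunc(sysdate) - 1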