There are cases where you need to run HCM Extracts on a schedule that is not available among the standard scheduling options. For example, you may need to run an HCM Extract every 6 hours. In such cases, a custom flow schedule fast formula can be created.
Sample fast formula to run an HCM Extract at 4 AM, 10 AM, 4 PM and 10 PM:
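A minimal sketch of such a formula follows, using the Flow Schedule formula type. The defaults and the 0.25-day increment are assumptions for the 6-hour cadence; verify the formula type and contexts in your release before use:

```
/* Formula Type: Flow Schedule */
DEFAULT FOR SUBMISSION_DATE IS '01-JAN-0001 00:00:00' (date)
DEFAULT FOR SCHEDULED_DATE IS '01-JAN-0001 00:00:00' (date)
INPUTS ARE SUBMISSION_DATE (date), SCHEDULED_DATE (date)
/* Repeat every 6 hours (0.25 of a day) from the last scheduled run.
   Submit the first run at 4 AM to get the 4 AM/10 AM/4 PM/10 PM cadence. */
NEXT_SCHEDULED_DATE = ADD_DAYS(SCHEDULED_DATE, 0.25)
RETURN NEXT_SCHEDULED_DATE
```

Attach the compiled formula to the extract flow schedule by choosing the frequency as a fast formula while scheduling.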
Succession Plans data is stored in the HRM_PLANS table. A succession plan can be a PUBLIC or a PRIVATE plan. By default, any user who queries the HRM_PLANS table will be able to see all plans data irrespective of the plan type (public or private). The requirement here is that the querying user should be able to see all PUBLIC succession plans but only those PRIVATE plans which are owned by that user.
This can be achieved using the below SQL:
select * from HRM_PLANS hp
where hp.ACCESS_TYPE_CODE = 'PUBLIC'
UNION
select * from HRM_PLANS hp
where hp.ACCESS_TYPE_CODE = 'PRIVATE'
and exists (select 1 from HRM_PLAN_OWNERS hpo
where hpo.person_id = hrc_session_util.get_user_personid
and hpo.plan_id = hp.plan_id)
You can pass multiple comma-separated values in an HCM Extract parameter and then use the below logic to split the comma-separated string into individual values:
SELECT *
FROM (SELECT trim(regexp_substr(param_person_number,'[^, ]+',1,LEVEL)) person_number
FROM (SELECT pay_report_utils.get_parameter_value('PER_NUMBER') param_person_number
FROM DUAL)
CONNECT BY regexp_substr(param_person_number,'[^, ]+',1,LEVEL) IS NOT NULL)
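The split values can then be plugged back into the main extract query as a filter. A sketch, assuming the extract exposes a PER_NUMBER parameter and filters the standard PER_ALL_PEOPLE_F table:

```sql
SELECT papf.person_id
      ,papf.person_number
  FROM per_all_people_f papf
 WHERE TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
   -- restrict to the person numbers passed in the comma-separated parameter
   AND papf.person_number IN
       (SELECT trim(regexp_substr(param_person_number,'[^, ]+',1,LEVEL))
          FROM (SELECT pay_report_utils.get_parameter_value('PER_NUMBER') param_person_number
                  FROM DUAL)
       CONNECT BY regexp_substr(param_person_number,'[^, ]+',1,LEVEL) IS NOT NULL)
```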
Below SQL can be used to extract collective agreement details:
SELECT pcafv.collective_agreement_name
,TO_CHAR(pcafv.effective_start_date,'YYYY/MM/DD') Start_Date
,pcafv.legislation_code
,pcafv.status
,pcafv.identification_code
,pcafv.description
,pcafv.comments
,houf_union.name Union_Name
,pcafv.bargaining_unit_code
,ple.name Legal_Employer
,pcafv.employee_org_name
,pcafv.employee_org_contact
,pcafv.employer_org_name
,pcafv.employer_org_contact
FROM per_col_agreements_f_vl pcafv
,hr_all_organization_units_f_vl houf_union
,per_legal_employers ple
WHERE pcafv.legal_entity_id = ple.organization_id(+)
AND pcafv.union_id = houf_union.organization_id(+)
AND trunc(SYSDATE) BETWEEN pcafv.effective_start_date AND pcafv.effective_end_date
AND trunc(SYSDATE) BETWEEN houf_union.effective_start_date(+) AND houf_union.effective_end_date(+)
AND trunc(SYSDATE) BETWEEN ple.effective_start_date(+) AND ple.effective_end_date(+)
Calendar events are used in HCM to capture public holidays available to employees based on their location or department.
Below SQL queries can be used to extract the list of configured calendar events from the backend tables:
List of all configured calendar events:
SELECT pcetl.name
,pce.short_code
,pcetl.description
,pce.category
,pce.coverage_type
,pce.start_date_time
,pce.end_date_time
,pce.tree_code
,pce.tree_structure_code
FROM per_calendar_events pce,
per_calendar_events_tl pcetl
WHERE pce.calendar_event_id=pcetl.calendar_event_id
AND pcetl.LANGUAGE=userenv('LANG')
List of calendar events by location:
SELECT hla.location_id
,hla.location_name
,hla.internal_location_code
,pcet.name
,pce.short_code
,pcet.description
,pce.category
,pce.coverage_type
,pce.start_date_time
,pce.end_date_time
,pce.tree_code
,pce.tree_structure_code
FROM hr_locations_all hla
,per_calendar_events pce
,per_calendar_events_tl pcet
,per_cal_event_coverage pcec
,per_geo_tree_node_rf pgtnr
WHERE hla.geo_hierarchy_node_value = pgtnr.ancestor_pk1_value
AND pgtnr.tree_structure_code = 'PER_GEO_TREE_STRUCTURE'
AND pgtnr.distance in (0)
AND pgtnr.ancestor_tree_node_id = pcec.tree_node_id
AND pcec.calendar_event_id = pcet.calendar_event_id
AND pcet.calendar_event_id = pce.calendar_event_id
AND pcet.language = userenv('lang')
AND pcec.coverage_flag in ('I')
AND trunc(sysdate) between hla.effective_start_date and hla.effective_end_date
Oracle provides HCMGroup.dat, which can be used to assign members to an HCM Group. However, the business object has a major limitation (as of release 22A): it can be used only for the below two seeded Oracle HCM Groups:
Manually Maintained Excluded Members
Manually Maintained Audit Members
Below is the sample HDL file to add members to Audit HCM Group:
For any custom HCM Groups, this business object can't be used. If you try to use a custom Group Code in the HDL file, you will receive the below error:
Valid values for attribute GROUP CODE: MANUALLY MAINTAINED AUDIT MEMBERS,MANUALLY MAINTAINED EXCLUDED MEMBERS.
Query to extract members of an HCM Group:
SELECT hgt.group_name
,papf.person_id
,papf.person_number
,ppnf.full_name
,ppnf.last_name
,ppnf.first_name
,hgmf.effective_start_date
,hgmf.effective_end_date
FROM PER_ALL_PEOPLE_F papf,
PER_PERSON_NAMES_F ppnf,
HWM_GRP_MEMBERS_F hgmf,
HWM_GRPS_B hgb,
HWM_GRPS_TL hgt
WHERE hgb.grp_id = hgmf.grp_id
AND hgb.grp_id = hgt.grp_id
AND hgt.language = 'US'
AND papf.person_id = hgmf.member_id
AND papf.person_id = ppnf.person_id
AND ppnf.name_type ='GLOBAL'
AND trunc(sysdate) BETWEEN hgmf.effective_start_date and hgmf.effective_end_date
AND trunc(sysdate) BETWEEN ppnf.effective_start_date and ppnf.effective_end_date
ORDER BY 1,3
Criticality at the position model profile level can be loaded using the TalentProfile.dat file. Use the below HDL file:
METADATA|TalentProfile|ProfileCode|ProfileId|ProfileTypeId|ProfileUsageCode|SourceSystemOwner|SourceSystemId|ProfileStatusCode|Description|Summary
MERGE|TalentProfile|TEST_POS_PROFILE_3||3|M|HRC_SQLLOADER|TEST_POS_PROFILE_3|A|A Test Position Profile 3|A Test Position Profile 3 Summary
METADATA|ProfileItem|ProfileCode|ContentType|RatingLevelId1|ItemText3015|SourceSystemOwner|SourceSystemId|DateFrom|SectionId
MERGE|ProfileItem|TEST_POS_PROFILE_3|CRITICALITY||N|HRC_SQLLOADER|TEST_POS_PROFILE_3_CRIT|2022/01/01|8809
ItemText3015 – maps to the “Requires Succession Plan” radio button on the UI
Screen shot from UI of uploaded data:
Useful queries:
Uploaded data will be stored in the HRT_PROFILE_ITEMS table.
Content Type –
select * from HRT_CONTENT_TYPES_TL where content_type_id=127
Section ID –
select *
from HRT_PROFILE_TYP_SECTIONS
WHERE CONTENT_TYPE_ID=127
AND SECTION_CONTEXT = 'POSITION_CRITICALITY'
Loading profile data against a worker/job/position is a little tricky. One has to understand the templates thoroughly before uploading any data.
Below are some of the most commonly used queries to extract the information required for loading talent profile data.
Extract Person Profile Code:
SELECT distinct papf.person_id, papf.person_number,ProfilePEO.profile_code
FROM HRT_PROFILES_VL ProfilePEO, PER_ALL_PEOPLE_F papf
WHERE papf.person_id = ProfilePEO.person_id
AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date and papf.effective_end_date
Sample HDL file to load succession plans (INCUMBENT, JOB and POSITION plan types):
METADATA|SuccessionPlan|PlanName|PlanType|Status|AccessTypeCode|Description|IncumbentPersonNumber|JobCode|JobSetCode|DepartmentName|DepartmentSetCode|GradeSetCode|GradeCode|PositionCode|BusinessUnitName|SourceSystemId|SourceSystemOwner
MERGE|SuccessionPlan|Test_Person_Succession_Plan|INCUMBENT|ACTIVE|PUBLIC|Sample description for Incumbent Plan Type.|100019|||||||||HRC_SQLLOADER_Test_Person_Succession_Plan_INCUMBENT|HRC_SQLLOADER
MERGE|SuccessionPlan|Test_Job_Succession_Plan|JOB|ACTIVE|PUBLIC|Sample description for Job Plan Type.||JOB_33052|COMMON|Test Department|COMMON|COMMON|82||COMMON|HRC_SQLLOADER_Test_Job_Succession_Plan_JOB|HRC_SQLLOADER
MERGE|SuccessionPlan|Test_Position_Succession_Plan|POSITION|ACTIVE|PUBLIC|Sample description for Position Plan Type.||||Test Department|COMMON|||POS_0030|COMMON|HRC_SQLLOADER_Test_Position_Succession_Plan_POSITION|HRC_SQLLOADER
METADATA|SuccessionPlanCandidate|PlanName|PlanCandidatePersonNumber|CandidateRanking|CandidateReadiness|EmergencySuccessor|Status|SourceSystemId|SourceSystemOwner
MERGE|SuccessionPlanCandidate|Test_Position_Succession_Plan|100019|2|READY_NOW|Y|ACTIVE|HRC_SQLLOADER_Y_Test_Position_Succession_Plan_100019|HRC_SQLLOADER
METADATA|SuccessionPlanOwner|PlanName|PlanOwnerPersonNumber|OwnerTypeCode|SourceSystemId|SourceSystemOwner
MERGE|SuccessionPlanOwner|Test_Person_Succession_Plan|800012|ADMINISTRATOR|HRC_SQLLOADER_ADMINISTRATOR_800012_Test_Person_Succession_Plan|HRC_SQLLOADER
MERGE|SuccessionPlanOwner|Test_Job_Succession_Plan|800012|ADMINISTRATOR|HRC_SQLLOADER_ADMINISTRATOR_800012_Test_Job_Succession_Plan|HRC_SQLLOADER
MERGE|SuccessionPlanOwner|Test_Position_Succession_Plan|800012|ADMINISTRATOR|HRC_SQLLOADER_ADMINISTRATOR_800012_Test_Position_Succession_Plan|HRC_SQLLOADER
MERGE|SuccessionPlanOwner|Test_Position_Succession_Plan|897132|ADMINISTRATOR|HRC_SQLLOADER_ADMINISTRATOR_104990_Test_Position_Succession_Plan|HRC_SQLLOADER
For many BIP reports, it is required to publish the environment URL. One can easily hardcode it in the SQL query; however, when a P2T or T2T refresh happens, the same URL gets copied to the target instance, which is incorrect. The better way is to use the below piece of code in the custom SQL to derive the environment URL dynamically:
SELECT 'https://'||EXTERNAL_VIRTUAL_HOST INSTANCE_NAME
FROM FUSION.ASK_DEPLOYED_DOMAINS
WHERE DEPLOYED_DOMAIN_NAME = 'FADomain'
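For example, the derived host can be joined into a report query to build environment-specific links. This is a sketch only; the page path appended to the host is a placeholder, not a real page route:

```sql
SELECT papf.person_number
      ,(SELECT 'https://'||dom.EXTERNAL_VIRTUAL_HOST
          FROM FUSION.ASK_DEPLOYED_DOMAINS dom
         WHERE dom.DEPLOYED_DOMAIN_NAME = 'FADomain')
       ||'/some/page/path' -- placeholder, replace with the actual page route
       worker_link
  FROM per_all_people_f papf
 WHERE TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
```

Because the host is read from ASK_DEPLOYED_DOMAINS at runtime, the link automatically points to the right environment after a refresh.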
Person External Identifier information can be deleted easily using HDL. Use the ExternalIdentifier child business object under Worker to delete this data.
Run the below query to extract the required data in HDL format:
SELECT 'METADATA|ExternalIdentifier|ExternalIdentifierId|PersonId' HEADER, 1 DATA_ROW
FROM DUAL
UNION
SELECT 'DELETE|ExternalIdentifier|'
||peai.EXT_IDENTIFIER_ID
||'|'||peai.PERSON_ID DATAR, 2 DATA_ROW
FROM PER_EXT_APP_IDENTIFIERS peai
,PER_ALL_PEOPLE_F papf
WHERE peai.person_id = papf.person_id
AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
AND papf.person_number IN ('11','12')
Below SQL can be used to extract salary basis details along with the associated element, input value and grade rate:
SELECT csb.name
,csb.salary_basis_code
,csb.salary_annualization_factor
,pld.name LDG
,petft.element_name
,pivt.name input_value_name
,rate.name rate_name
FROM cmp_salary_bases csb,
per_legislative_data_groups_tl pld,
pay_element_types_tl petft,
pay_input_values_f pivf,
pay_input_values_tl pivt,
per_rates_f_tl rate
WHERE 1=1
and csb.legislative_data_group_id = pld.legislative_data_group_id
and pld.language = USERENV('LANG')
and csb.element_type_id = petft.element_type_id
-- and trunc(sysdate) between petft.effective_start_date and petft.effective_end_date
and petft.language = USERENV('LANG')
and csb.element_type_id = pivf.element_type_id
and csb.input_value_id = pivf.input_value_id
and trunc(sysdate) between pivf.effective_start_date and pivf.effective_end_date
and pivf.input_value_id = pivt.input_value_id
and pivt.language = USERENV('LANG')
and csb.grade_rate_id = rate.rate_id(+)
and NVL(rate.language,USERENV('LANG')) = USERENV('LANG')
ORDER BY 1
There may be a requirement to delete salary data while doing HDL iterations for the first time. You can use the below query for this purpose:
Select 'METADATA|Salary|AssignmentNumber|SalaryAmount|DateFrom|DateTo|SalaryBasisId|SalaryId' Header, 1 data_flow_order
from dual
UNION
SELECT 'DELETE|Salary'||'|'||
paam.assignment_number||'|'||
SALARY_AMOUNT||'|'||
TO_CHAR(cs.date_from,'RRRR/MM/DD', 'nls_date_language=American')||'|'||
TO_CHAR(cs.date_to,'RRRR/MM/DD', 'nls_date_language=American')||'|'||
cs.salary_basis_id||'|'||
cs.salary_id data_row,
2 data_flow_order
FROM cmp_salary cs,
per_all_assignments_m paam
WHERE cs.assignment_id= paam.assignment_id
AND trunc(sysdate) between paam.effective_start_date AND paam.effective_end_date
AND paam.assignment_type in ('E', 'C', 'P')
AND trunc(cs.creation_date) < trunc(sysdate)-1
ORDER BY data_flow_order
In case you need to delete assigned payroll data, for masking purposes in a lower environment or for some other reason like data correction, use the below query to pull the data in HDL format:
SELECT 'METADATA|AssignedPayroll|AssignedPayrollId|EffectiveStartDate|AssignmentId|LegislativeDataGroupId|PayrollId|PayrollTermId|SourceSystemId|SourceSystemOwner' HEADER, 1 DATA_ORDER
FROM DUAL
UNION ALL
SELECT 'DELETE|AssignedPayroll'
||'|'||
papd.ASSIGNED_PAYROLL_ID
||'|'||
to_char(papd.start_date,'RRRR/MM/DD')
||'|'||
paam.assignment_id
||'|'||
papd.LEGISLATIVE_DATA_GROUP_ID
||'|'||
papd.Payroll_Id
||'|'||
papd.Payroll_Term_Id
||'|'||
hikm.source_system_id
||'|'||
hikm.source_system_owner HEADER, 2 DATA_ORDER
FROM pay_assigned_payrolls_dn papd
,pay_payroll_terms ppt
,pay_pay_relationships_dn prrd
,pay_all_payrolls_f papf_pay
,pay_rel_groups_dn prgd
,pay_rel_groups_f prgf
,per_all_people_f papf
,per_person_names_f ppnf
,per_all_assignments_m paam
,hrc_integration_key_map hikm
WHERE papd.payroll_term_id = ppt.payroll_term_id
AND ppt.payroll_relationship_id = prrd.payroll_relationship_id
AND papd.payroll_id = papf_pay.payroll_id
AND prrd.person_id = papf.person_id
AND prrd.payroll_relationship_id = prgd.payroll_relationship_id
AND prgd.relationship_group_id = prgf.relationship_group_id
AND prgd.assignment_id = paam.assignment_id
AND paam.effective_latest_change = 'Y'
AND prgd.group_type = 'A'
AND ppnf.person_id = papf.person_id
AND ppnf.name_type = 'GLOBAL'
AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
AND TRUNC(SYSDATE) BETWEEN ppnf.effective_start_date AND ppnf.effective_end_date
AND TRUNC(SYSDATE) BETWEEN prgf.effective_start_date AND prgf.effective_end_date
AND TRUNC(SYSDATE) BETWEEN paam.effective_start_date AND paam.effective_end_date
AND hikm.surrogate_id = papd.ASSIGNED_PAYROLL_ID
AND trunc(papd.creation_date) < trunc(sysdate)-1
--AND papf.person_number = '11111'
ORDER BY 2
For Position model profiles, there are additional attributes like Description, Qualifications and Responsibilities. To load these, one should make use of the ModelProfileExtraInfo child business object of TalentProfile.dat.
Attribute values for Description, Qualifications and Responsibilities should be stored in .txt files. The .txt file(s) should then be placed under the ClobFiles folder and zipped together with the .dat file.
Sample HDL File to load position model profile extra info:
METADATA|TalentProfile|ProfileCode|ProfileId|ProfileTypeId|ProfileUsageCode|SourceSystemOwner|SourceSystemId|ProfileStatusCode|Description|Summary
MERGE|TalentProfile|TEST_POS_PROFILE_2||3|M|HRC_SQLLOADER|TEST_POS_PROFILE_2|A|A Test Position Profile2|A Test Position Profile 2
METADATA|ModelProfileExtraInfo|ProfileExtraInformationId|Description|ProfileId|Qualifications|Responsibilities|ProfileCode|SourceSystemId|SourceSystemOwner
MERGE|ModelProfileExtraInfo||Test_Desc.txt||Test_Qual.txt|Test_Resp.txt|TEST_POS_PROFILE_2|TEST_POS_PROFILE_2_MFEI|HRC_SQLLOADER
TalentProfile.dat can be used to create worker talent profiles and to load content items against those profiles. While loading some content items is straightforward, loading competencies is a little tricky, because additional fields like QualifierId1 and QualifierId2 must be supplied to make the loaded data visible on the UI. The below query extracts the list of valid qualifiers:
select QUALIFIER_ID,DESCRIPTION
from FUSION.HRT_QUALIFIERS_VL
where QUALIFIER_SET_ID in (
select QUALIFIER_SET_ID from fusion.HRT_QUALIFIER_SETS_VL where SECTION_ID = 300000236137110)
Sample data from above query:
By default on UI, View By is set to “Official”:
Navigation :- Person Management -> Talent Profile
So, if you pass QualifierId1 as something different from the Id for “Official”, the loaded data will not be visible until “View By” is set to All.
Below is a high-level description of some of the main attributes of the ProfileItem business object:
ProfileId – profile Id of the person. If you don’t have it, leave it blank and pass the value under ProfileCode.
ProfileCode – use this if you don’t know the ProfileId (above).
DateFrom – from date.
RatingLevelCode2 – pass 1, 2, 3, 4 or 5 based on the rating level (visible under the rating model).
SourceSystemId – pass the same value as the Profile Id or Profile Code.
ItemDate6 – review date.
ContentItem – competency name.
QualifierId1 – qualifier_id for the appropriate evaluator type. Can be taken from the query above.
QualifierId2 – Person Id corresponding to the evaluator type. For example, if using the qualifier_id for Self, pass the person_id of the worker in QualifierId2.
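Putting these attributes together, a sketch of a ProfileItem line for one competency follows. All codes, IDs and dates are placeholders, and the attribute list should be verified against the TalentProfile.dat template in your environment before use:

```
METADATA|ProfileItem|ProfileCode|ContentType|ContentItem|DateFrom|ItemDate6|RatingLevelCode2|QualifierId1|QualifierId2|SourceSystemId|SourceSystemOwner
MERGE|ProfileItem|PERSON_123456|COMPETENCY|Communication|2022/01/01|2022/06/30|3|100000000000001|300000000000002|PERSON_123456_COMP_1|HRC_SQLLOADER
```

Here QualifierId1 would be the qualifier_id returned by the query above for the chosen evaluator type, and QualifierId2 the person_id of that evaluator.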
HCM Extracts with inbound delivery are one of the most powerful technical features of Fusion HCM Cloud. They enable technical consultants to build customizations and automations using tools like BIP, HCM Extracts and HCM Data Loader.
You can follow the below post for more details on the topic:
One has to take care of a number of things when designing an Inbound HCM Extract or a loopback extract; otherwise, you may get a variety of errors.
One of the most commonly encountered errors is:
An assertion failure was detected at location BatchProcArchivePA.getRepMapId:2
There can be multiple reasons for this error. To resolve it, check the below:
Are you using a BIP report to generate the output? If yes, check whether permissions on the report and the data model are granted to FUSION%APPS%HCM%ESS%APPID.
Download the sample output from the data model and check if you can generate a sample output using the BI Publisher plugin.
Check if the report path used in the extract is correct.
Check if you are able to deliver the output to UCM.
Check if the extract parameters are correct and the related value sets/fast formulas are present in the environment.
This is the second post in a series on extracting Rating Model data from a Fusion environment in HDL format. The first post has a query to extract only the rating model data; this post describes the BIP query to extract the rating model levels data.