Post refresh, the attachments from the source environment get copied to the target environment. For data security, it is highly recommended to purge these attachments in lower environments, because Oracle's standard data masking feature doesn't remove any attachments from UCM.
To overcome this problem, I created a BIP report which pulls the details of all document records, across all worker types, that have an attachment. The BIP generates data in HDL format, which can be saved as DocumentsOfRecord.dat.
Sample BIP data model query (the below query can be filtered by document type or person number):
SELECT a.DATAROW
FROM
(
SELECT 'METADATA|DocumentAttachment|PersonNumber|DocumentTypeId|DocumentType|DocumentCode|DataTypeCode|Title|FileName' "DATAROW"
,1 row_num
FROM DUAL
UNION
SELECT 'DELETE|DocumentAttachment'||'|'||
papf.person_number||'|'||
hdor.document_type_id||'|'||
hdtt.document_type||'|'||
hdor.document_code||'|'||
fd.datatype_code||'|'||
fdt.title||'|'||
fdt.file_name "DATAROW"
,2 row_num
FROM per_all_people_f papf,
hr_documents_of_record hdor,
hr_document_types_tl hdtt,
fnd_attached_documents fad,
fnd_documents_tl fdt,
fnd_documents fd
WHERE 1=1
AND hdor.person_id = papf.person_id
AND hdor.documents_of_record_id = fad.pk1_value
AND fad.document_id = fdt.document_id
AND fd.document_id = fdt.document_id
AND fdt.language = 'US'
AND fad.entity_name = 'HR_DOCUMENTS_OF_RECORD'
AND hdor.document_type_id = hdtt.document_type_id
AND hdtt.language = 'US'
--AND papf.person_number = '269628'
--AND hdtt.document_type = 'Passport Info'
AND trunc(sysdate) between papf.effective_start_date and papf.effective_end_date
ORDER BY row_num
) a
There are many legislatively required attributes defined against the location object in HCM. These attributes are set up as extensible flexfields (EFF) in Oracle Fusion, and these EFF attributes are protected against updates.
One common requirement is to bulk upload data against these EFF attributes, and HDL can be used for this.
Below is a sample HDL file to load the "HR Reporting Location" attribute under "United States EEO and Veteran Reporting Information":
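Treat the below as a sketch only: the flexfield code (PER_LOCATION_LEG_EFF), the segment name (hrReportingLocation), and the location/set/segment values are assumptions that must be verified against the Location business object definition in your environment. The FLEX column receives the context code, following the same pattern as the department DFF example later in this document.
METADATA|LocationLegislative|LocationCode|SetCode|EffectiveStartDate|EffectiveEndDate|LegislationCode|LleInformationCategory|FLEX:PER_LOCATION_LEG_EFF|hrReportingLocation(PER_LOCATION_LEG_EFF=HRX_US_LOC_EEO_VETS_INF)
MERGE|LocationLegislative|US_HQ_LOCATION|COMMON|1951/01/01|4712/12/31|US|HRX_US_LOC_EEO_VETS_INF|HRX_US_LOC_EEO_VETS_INF|New York HQ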
LleInformationCategory – should be the same as the EFF context – HRX_US_LOC_EEO_VETS_INF
If any of these attributes is not passed correctly, you will get an error:
An error occurred. To review details of the error run the HCM Data Loader Error Analysis Report diagnostic test.
Message details: JBO-26037: Cannot find matching EO from discriminator columns for view object LocationLegislativeHRX_5FUS_5FLOC_5FEEO_5FVETS_5FINFprivateVOLogical, entity base LocationLegislativeHRX_5FUS_5FLOC_5FEEO_5FVETS_5FINFprivateEO, discr value Discr values: HCM_LOC_LEG..
For the US legislation, the Reporting Information calculation card is a mandatory calculation card.
The CalculationCard.dat HDL file is used to upload the calculation component details.
In this example, we will prepare a sample HDL file to upload the "Corporate Officer" component value and set it to "Not a Corporate Officer".
To create calculation component details, the below three METADATA definitions should be used:
CalculationCard
CardComponent
ComponentDetail
Below is a worked example of the same:
METADATA|CalculationCard|EffectiveStartDate|LegislativeDataGroupName|DirCardDefinitionName|CardSequence|AssignmentNumber
MERGE|CalculationCard|2024/11/01|United States|Reporting Information|1|E303510
METADATA|CardComponent|CardSequence|ComponentSequence|AssignmentNumber|EffectiveStartDate|EffectiveEndDate|DirCardDefinitionName|LegislativeDataGroupName|DirCardCompDefName
MERGE|CardComponent|1|1|E303510|2024/11/01|4712/12/31|Reporting Information|United States|Reporting Information
METADATA|ComponentDetail|AssignmentNumber|ComponentSequence|CardSequence|DirCardCompDefName|DirCardDefinitionName|DirInformationCategory|PayrollRelationshipNumber|EffectiveStartDate|EffectiveEndDate|DirCompFlexId|FLEX:Deduction Developer DF|_CORPORATE_OFFICER(Deduction Developer DF=HRX_US_REP_REL)|LegislativeDataGroupName
MERGE|ComponentDetail|E303510|1|1|Reporting Information|Reporting Information|HRX_US_REP_REL|303510|2024/11/01|4712/12/31|300000000630850|HRX_US_REP_REL|Y|United States
Save the data as CalculationCard.dat, zip the file, and upload it. The calculation card component will be created successfully.
However, there are a few attributes for which you need to know the values in advance, before preparing the HDL file. One such attribute is PayrollRelationshipNumber. You can run the below SQL to get the PayrollRelationshipNumber:
SELECT DISTINCT papf.person_number
,paam.assignment_status_type
,paam.assignment_number
,pprd.payroll_relationship_number
,to_char(GREATEST(to_date('2024/11/01','RRRR/MM/DD'),ppos.date_start),'RRRR/MM/DD') effective_start_date
FROM per_all_people_f papf
,per_all_assignments_m paam
,per_periods_of_service ppos
,pay_payroll_assignments ppa
,pay_pay_relationships_dn pprd
WHERE ppos.person_id = paam.person_id
AND ppos.period_of_service_id = paam.period_of_service_id
AND paam.effective_latest_change = 'Y'
AND paam.assignment_type in ( 'E', 'C')
--AND paam.assignment_status_type <> 'INACTIVE'
AND paam.person_id = papf.person_id
AND ppa.person_id (+) = papf.person_id
AND ppa.hr_assignment_id (+) = paam.assignment_id
AND ppos.date_start = (SELECT max(ppos2.date_start)
FROM per_periods_of_service ppos2
WHERE ppos.person_id = ppos2.person_id
AND period_type IN ('E','C')
)
AND pprd.payroll_relationship_id (+) = ppa.payroll_relationship_id
AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
AND (GREATEST(to_date('2024/11/01','RRRR/MM/DD'),ppos.date_start) BETWEEN paam.effective_start_date AND paam.effective_end_date
)
AND papf.person_number IN ('303510')
Once the data is loaded successfully, you can run the below to extract card component details:
/*
Officer Code -> Calculation Cards -> Reporting Information (federal) -> Calculation Component Details -> Reporting Information -> Officer Code
*/
SELECT papf.person_number
,paam.assignment_status_type
,'Corporate Officer' Component_Information_Segment
,pdcdf.dir_information_char2 component_value_code
,(SELECT meaning
FROM fnd_lookup_values flv
WHERE lookup_code = pdcdf.dir_information_char2
AND flv.lookup_type = 'HRX_US_CORPORATE_OFFICER_CODES'
AND flv.language = 'US') component_value
,pdcdf.DIR_COMP_DETAIL_ID "DirCompDetailId"
,pdcf.dir_card_id "DirCardId"
,pdcdf.DIR_CARD_COMP_ID "DirCardCompId"
,paam.assignment_number "AssignmentNumber"
,pdccf.component_sequence "ComponentSequence"
,pdcf.card_sequence "CardSequence"
,pdccdv.component_name "DirCardCompDefName"
,pdcdv.display_name "DirCardDefinitionName"
,'HRX_US_REP_REL' "DirInformationCategory"
,pprd.payroll_relationship_number "PayrollRelationshipNumber"
,to_char(pdcdf.effective_start_date,'RRRR/MM/DD') "EffectiveStartDate"
,to_char(pdcdf.effective_end_date,'RRRR/MM/DD') "EffectiveEndDate"
,'300000000630850' "DirCompFlexId"
,'HRX_US_REP_REL' "FLEX:Deduction Developer DF"
,hikm.source_system_id
,hikm.source_system_owner
FROM per_all_people_f papf
,per_all_assignments_m paam
,per_periods_of_service ppos
,pay_payroll_assignments ppa
,pay_pay_relationships_dn pprd
,pay_dir_card_definitions_vl pdcdv
,pay_dir_card_comp_defs_vl pdccdv
,pay_dir_cards_f pdcf
,pay_dir_card_components_f pdccf
,pay_dir_comp_details_f pdcdf
,hrc_integration_key_map hikm
WHERE ppos.person_id = paam.person_id
AND ppos.period_of_service_id = paam.period_of_service_id
AND paam.effective_latest_change = 'Y'
AND paam.assignment_type in ( 'E', 'C')
--AND paam.assignment_status_type <> 'INACTIVE'
AND paam.person_id = papf.person_id
AND pdcdv.display_name = 'Reporting Information'
AND pdccdv.component_name = 'Reporting Information'
AND ppa.person_id = papf.person_id
AND pprd.payroll_relationship_id = ppa.payroll_relationship_id
AND pdcf.payroll_relationship_id = ppa.payroll_relationship_id
AND pdcf.dir_card_definition_id = pdccdv.dir_card_definition_id
AND pdcdv.dir_card_definition_id = pdccdv.dir_card_definition_id
AND pdccdv.dir_card_comp_def_id = pdccf.dir_card_comp_def_id
AND pdccf.dir_card_id = pdcf.dir_card_id
AND pdcdf.dir_card_comp_id = pdccf.dir_card_comp_id
AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
AND to_date('2024/11/01','RRRR/MM/DD') BETWEEN paam.effective_start_date AND paam.effective_end_date
AND to_date('2024/11/01','RRRR/MM/DD') BETWEEN pdccdv.effective_start_date AND pdccdv.effective_end_date
AND to_date('2024/11/01','RRRR/MM/DD') BETWEEN pdcf.effective_start_date AND pdcf.effective_end_date
AND to_date('2024/11/01','RRRR/MM/DD') BETWEEN pdccf.effective_start_date AND pdccf.effective_end_date
AND to_date('2024/11/01','RRRR/MM/DD') BETWEEN pdcdf.effective_start_date AND pdcdf.effective_end_date
AND papf.person_number = '303510'
AND hikm.surrogate_id = pdcdf.dir_comp_detail_id
When doing a mass upload of participants in a performance document, there may be a need to send a notification message to the participants.
To load participants, you need to use the PerfDocComplete -> Participant business object. However, if the data is loaded without the "Message" attribute in the Participant business object, no message will be sent to the participants.
The notifications are governed by a setting in the 'Configure HCM Data Loader' task under Setup and Maintenance. Under Business Objects, choose "Performance Document" and you can see the process named Send Notifications to Mass Loaded Participants.
But again, for this setting to work, the Message attribute must be included in the Participant object, as in the sketch below.
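For illustration only, the Participant record would carry the Message attribute along these lines. All attribute names and values below are assumptions; take the exact attribute list from the PerfDocComplete.dat business object documentation before use.
METADATA|Participant|PerformanceDocumentName|PersonNumber|ParticipantPersonNumber|ParticipantRole|Message
MERGE|Participant|2024 Annual Performance Review|1001|2002|Peer|Y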
There are scenario’s where customers want to convert their non permanent staff to permanent staff in bulk or vice versa. In HCM terminology, this causes changes in system person type of a person. Let us take an example where a customer wants to hire all the contingent workers working in the company as permanent staff.
As mentioned earlier, converting a contingent worker into a worker/employee person type will change the system person type of the person. This is a two-step process:
Terminate all the CWK records.
Rehire them as workers.
Performing these steps manually for 100+ records is a tough, error-prone task, so it is better to use HDL for this.
First, we will terminate the CWK records. Below is the sample file for the same:
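A minimal sketch using the WorkRelationship component of Worker.dat; the person number, legal employer, dates, and action code below are placeholders to adjust to your data, and the attribute list should be confirmed against the Worker.dat specification.
METADATA|WorkRelationship|PersonNumber|DateStart|LegalEmployerName|WorkerType|ActualTerminationDate|ActionCode
MERGE|WorkRelationship|1001|2020/01/01|XYZ Legal Employer|C|2024/10/31|TERMINATION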
Sample file to upload/update/correct DFF values on department using User Keys:
METADATA|Organization|Name|ClassificationName|EffectiveStartDate|EffectiveEndDate|FLEX:PER_ORGANIZATION_UNIT_DFF|testAttribute(PER_ORGANIZATION_UNIT_DFF=Global Data Elements)
MERGE|Organization|Test Organization|Department|1951/01/01|4712/12/31|Global Data Elements|Xyz
METADATA|OrgUnitClassification|OrganizationName|ClassificationName|SetCode|EffectiveStartDate|EffectiveEndDate
MERGE|OrgUnitClassification|Test Organization|Department|COMMON|1951/01/01|4712/12/31
Sample file to load licenses and certifications to employee profile:
METADATA|TalentProfile|ProfileCode|PersonNumber|ProfileId|SourceSystemOwner|SourceSystemId
MERGE|TalentProfile|PERS_300000123456789|1234|300000123456789|HRC_SQLLOADER|PERS_300000123456789
METADATA|ProfileItem|ProfileId|ProfileCode|SectionId|ContentTypeId|ContentItem|DateFrom|DateTo|RatingModelCode1|RatingLevelCode1|RatingModelCode2|RatingLevelCode2|RatingModelCode3|RatingLevelCode3|ItemText301|ItemText302|ItemText303|SourceSystemOwner|SourceSystemId
MERGE|ProfileItem|300000123456789|PERS_300000123456789|9989|103|Oracle Global HR|2024/07/01|||||||||||HRC_SQLLOADER|HRC_SQLLOADER_PERS_300000123456789_Oracle Global HR
Sample useful queries:
Query to get source system ID and owner details for existing profiles:
select hrb.profile_id
, hikm.source_system_id
, hikm.source_system_owner
from HRT_PROFILES_B hrb,
HRC_INTEGRATION_KEY_MAP hikm
where hrb.profile_id = hikm.surrogate_id
Query to get profile id for a worker:
select papf.person_number
, hrb.profile_id
, hrb.profile_code
from HRT_PROFILES_B hrb,
PER_ALL_PEOPLE_F papf
where papf.PERSON_ID = hrb.PERSON_ID
and TRUNC(SYSDATE) BETWEEN papf.EFFECTIVE_START_DATE AND papf.EFFECTIVE_END_DATE
and papf.person_number = <>
Given the security concerns regarding access to Oracle HCM applications, I have seen many customers asking for a way to restrict access to a particular DEV/TEST environment that holds unmasked data. One option in such scenarios is to keep only the admin user accounts active in that environment and deactivate all other user accounts. This way, the user-role data is kept intact and access is restricted to a limited set of users.
Let us now understand the kinds of users that can exist in a Fusion HCM environment: system users (seeded), service accounts, worker accounts (users tied to a person), and standalone user accounts (for vendors/SI partners). So, it is really important to filter out the right set of user accounts to deactivate. Also, the method of deactivation can vary depending upon the type of user.
Bulk deactivation of users can be performed using either HDL or the SCIM REST API. While HDL is a bulk data upload tool, it has its own set of limitations; HDL can't be used to deactivate standalone users, i.e. users which don't have an associated person record. To deactivate standalone users, the REST API should be used.
I will discuss both approaches in detail. Let us first find a way to store the admin user accounts which should remain active. My preferred way of doing this is to create a common lookup and add the user names in this lookup, because lookup values can be updated easily using a spreadsheet loader.
Below is the sample lookup (XX_ACTIVE_USER_ACCOUNTS) which I created to store the admin user names:
The next step is to add the user accounts in the Meaning attribute:
Two user accounts – [email protected] and [email protected] – have been added. The next step will be to filter these records out of the deactivation steps.
Let us now discuss the first approach, which is to deactivate user accounts using HDL. The below SQL query can be used to get a list of all the relevant active user accounts in User.dat HDL format:
SELECT 'METADATA|User|UserId|Suspended' datarow
,1 seq
FROM DUAL
UNION
SELECT 'MERGE|User|'
|| pu.user_id
|| '|Y' datarow
,2 seq
FROM per_users pu
WHERE pu.person_id IS NOT NULL
AND pu.created_by NOT IN ('anonymous')
AND pu.username NOT LIKE 'FUSION%APPS%'
AND pu.username NOT IN ('AIACS_AIAPPS_LHR_STAGE_APPID','FAAdmin','FAWService','FAWService_APPID','FIISUSER','HCMSI-98f0f163a79a46c58fa4572e41fac8ed_scim_client_APPID','IDROUser','IDRWUser', 'OCLOUD9_osn_APPID','PSCR_PROXY_USER','PUBLIC','app_monitor1','app_monitor', 'em_monitoring2','fa_monitor','faoperator','oamAdminUser','puds.pscr.anonymous.user','weblogic_idm','anonymous'
)
AND pu.suspended = 'N'
AND lower(pu.username) NOT IN (SELECT lower(flv.meaning)
FROM fnd_lookup_values flv
WHERE flv.lookup_type = 'XX_ACTIVE_USER_ACCOUNTS'
AND flv.language = 'US'
AND flv.enabled_flag = 'Y'
)
ORDER BY seq
So, the above query will return only those active user accounts which are attached to a person record and don’t exist in the custom lookup XX_ACTIVE_USER_ACCOUNTS.
**The Suspended flag in the PER_USERS table indicates whether the user is active (N) or inactive (Y).
The next step is to create a BIP data model and report, and save the output data in Excel format.
Sample output in Excel format:
Copy the data (excluding the "DATAROW" column header) and paste it into Notepad. Save the file as User.dat.
Zip the User.dat file and upload it to HCM using Data Exchange -> Import and Load.
Once the load is successful, run the 'Send Pending LDAP Requests' ESS job. This should deactivate all the extracted users.
You can run quick queries on per_users to make sure that the user accounts have been deactivated.
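For example, a minimal verification sketch (it reuses only the tables and filters from the extraction query above) that should return no rows once the deactivation has completed:
SELECT pu.username
,pu.suspended
FROM per_users pu
WHERE pu.person_id IS NOT NULL
AND pu.suspended = 'N'
AND lower(pu.username) NOT IN (SELECT lower(flv.meaning)
FROM fnd_lookup_values flv
WHERE flv.lookup_type = 'XX_ACTIVE_USER_ACCOUNTS'
AND flv.language = 'US'
AND flv.enabled_flag = 'Y'
)
-- add the same seeded-account exclusions used in the extraction query above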
The second approach is to use the SCIM REST API to bulk deactivate user accounts. I recommend using this approach only for those users where no person record is attached to the user account.
Please check the below MOS note for step-by-step instructions on the SCIM REST API:
Fusion Security: Using SCIM REST API (Doc ID 2346455.1)
Please note that in order to run this REST API, the user should have the IT Security Manager role. Below is a sample bulk payload to deactivate two user accounts:
{
"Operations":[
{
"method":"PATCH",
"path":"/Users/0453A72EE08D419BE0631078680AA831",
"bulkId":"100000001",
"data":{
"schemas":[
"urn:scim:schemas:core:2.0:User"
],
"active":false
}
},
{
"method":"PATCH",
"path":"/Users/0453A72EE08D419BE0631078612AA832",
"bulkId":"100000001",
"data":{
"schemas":[
"urn:scim:schemas:core:2.0:User"
],
"active":false
}
}
]
}
Please note (taken from the above Oracle note):
The bulkId attribute value should be set to UNIQUE value, while creating user accounts in BULK. This is required as per IETF SCIM Specifications while creating new resources using POST method. You may use a common value for the bulkId attribute while using PATCH, DELETE, PUT methods in a Bulk operation.
The main challenge with this approach is to get the correct JSON payload for multiple users from the system. I have created a BIP report for this, which generates the output data in the required JSON format. Below is the sample code:
SELECT '{
"Operations":['
data_row, 1 seq
FROM DUAL
UNION
SELECT
'{
"method":"PATCH",
"path":"/Users/'
||pu.user_guid||
'",
"bulkId":"1000000000001",
"data":{
"schemas":[
"urn:scim:schemas:core:2.0:User"
],
"active":false
}
},' data_row, 2 seq
FROM per_users pu
WHERE pu.person_id IS NOT NULL
AND pu.created_by NOT IN ('anonymous')
AND pu.username NOT LIKE 'FUSION%APPS%'
AND pu.username NOT IN ('AIACS_AIAPPS_LHR_STAGE_APPID','FAAdmin','FAWService','FAWService_APPID','FIISUSER',
'HCMSI-98f0f163a79a46c58fa4572e41fac8ed_scim_client_APPID','IDROUser','IDRWUser',
'OCLOUD9_osn_APPID','PSCR_PROXY_USER','PUBLIC','app_monitor1','app_monitor',
'em_monitoring2','fa_monitor','faoperator','oamAdminUser','puds.pscr.anonymous.user',
'weblogic_idm','anonymous'
)
AND pu.suspended = 'N'
AND lower(pu.username) NOT IN (SELECT lower(flv.meaning)
FROM fnd_lookup_values flv
WHERE flv.lookup_type = 'XX_ACTIVE_USER_ACCOUNTS'
AND flv.language = 'US'
AND flv.enabled_flag = 'Y'
)
UNION
SELECT '
]
}' data_row, 3 seq
FROM dual
ORDER BY seq
You can create a BIP data model and report to get data from this query. Extract the data in Excel format and copy it into Notepad. Then you need to remove the trailing comma after the last user's entry (each generated row ends with "},", so the final one must become "}" for the JSON payload to be valid).
You can use SoapUI/Postman to run the REST API and provide the output from Notepad as the JSON input. Once the API runs successfully, the suspended flag will be changed to Y in the per_users table.
In order to load person profile items, a person should have a profile code. There are two ways to create the profile code for a person record.
From the UI: when a business user clicks on the Talent Profile of a worker, a profile code is automatically generated in the backend. The profile code is not visible in the UI; it is only stored in the backend.
Using HDL: profile codes can be loaded in bulk using the TalentProfile.dat business object of HCM Data Loader (HDL).
It is advisable to load profile codes in bulk as part of data migration using HDL. But there are cases where a user clicks on the Talent Profile of a worker just after the core data for the worker has been migrated. In that case, a profile code is generated in the HRT_PROFILES_B table, so when TalentProfile.dat is used, the profile record will fail for this particular worker.
Below SQL can be used to get a list of all active workers who don't have a talent profile code yet:
select *
from per_all_people_f papf
, per_periods_of_service ppos
where papf.person_id = ppos.person_id
and trunc(sysdate) between papf.effective_start_date and papf.effective_end_date
and ppos.actual_termination_date is NULL
and not exists (select 1 from HRT_PROFILES_B hpb where papf.person_id = hpb.person_id )
All recurring element entries are loaded/created with an effective end date of 31-Dec-4712. But I have seen scenarios where the business has a requirement to end date an element entry as of a certain date. To achieve this in bulk, one can use the ElementEntry.dat HDL business object.
Let us take an example, where an employee has an element entry with an effective end date of 31-Dec-4712:
The effective end date is set to blank, which is equivalent to 31-Dec-4712 in the backend table.
So, let us assume the business has requested to end date this particular element as of 31-Jan-2024 for all employees. To achieve this, we need to pull the existing data from the element entries table for this particular element. The below SQL can be used to get the ID values and to verify the results before and after the HDL load:
SELECT DISTINCT peevf.element_entry_value_id
,peef.element_entry_id
,petf.base_element_name
,peef.effective_start_date ele_sd
,peef.effective_end_date ele_ed
,peevf.effective_start_date
,peevf.effective_end_date
,paam.assignment_number
FROM per_all_assignments_m paam
,pay_element_types_f petf
,pay_element_entries_f peef
,pay_element_entry_values_f peevf
WHERE 1=1
AND paam.person_id = peef.person_id
AND peef.element_type_id = petf.element_type_id
AND peef.element_entry_id = peevf.element_entry_id
AND paam.ASSIGNMENT_TYPE in ('E')
AND paam.primary_assignment_flag = 'Y'
AND petf.base_element_name = 'Test XYZ Bonus'
AND paam.assignment_number = 'E2121212'
AND trunc(sysdate) between petf.effective_start_date and petf.effective_end_date
AND trunc(sysdate) between paam.effective_start_date and paam.effective_end_date
A normal HDL file with the new effective end date will just create a date-effective split in the data. To avoid this, a new attribute called ReplaceLastEffectiveEndDate should be added in the HDL file, which will update the effective end date from 31-Dec-4712 to 31-Jan-2024.
METADATA|ElementEntry|AssignmentNumber|ElementName|EffectiveStartDate|EffectiveEndDate|LegislativeDataGroupName|MultipleEntryCount|EntryType|ReplaceLastEffectiveEndDate
MERGE|ElementEntry|E2121212|Test XYZ Bonus|2012/01/31|2024/01/31|GB Legislative Data Group|1|E|Y
Once this HDL is run successfully, the effective end date will get updated.
There can be multiple grades assigned as valid grades at the Position level or Job level. However, there is no direct way to end date the valid grades in bulk.
You can end date a valid grade from the responsive UI, but it involves a lot of manual effort: search for the position, navigate to the Grades section, update the position, and click the small delete icon next to the grade name you want to end date. This will end date the valid grade with an effective end date equal to the date of the position update minus one day.
To do this in bulk using HDL, you can't use the DELETE command; DELETE would completely purge the valid grade record from the position. To end date the valid grade, use the "ReplaceLastEffectiveEndDate" attribute in the file.
Below is the sample file:
METADATA|PositionGrade|BusinessUnitName|PositionCode|EffectiveStartDate|EffectiveEndDate|GradeCode|GradeSetCode|ReplaceLastEffectiveEndDate
MERGE|PositionGrade|Progress US Business Unit|PRGUSPOS032|2018/12/31|2023/12/31|Hourly01|PRGUSGRADESET|Y
EffectiveStartDate – Earliest Grade Start Date
EffectiveEndDate – Date on which you want to end date the grade.
Once the file is loaded successfully, below is how the data will look in the backend:
Below SQL query can be used to extract valid grades data:
SELECT DISTINCT
TO_CHAR (pvgf.effective_start_date, 'DD/MON/YYYY') effective_start_date,
TO_CHAR (pvgf.effective_end_date, 'DD/MON/YYYY') effective_end_date,
pjfv.POSITION_CODE,
pjfv.name position_name,
pgfv.grade_code,
pgfv.name grade_name,
pvgf.valid_grade_id,
pgfv.grade_id,
pjfv.job_id
FROM per_valid_grades_f pvgf,
HR_ALL_POSITIONS_F_VL pjfv,
per_grades_f_vl pgfv
WHERE 1=1
AND pvgf.position_id = pjfv.position_id
AND pvgf.grade_id = pgfv.grade_id
AND pjfv.POSITION_CODE = 'PRGUSPOS032'
AND pvgf.effective_start_date BETWEEN pjfv.effective_start_date AND pjfv.effective_end_date
AND pvgf.effective_start_date BETWEEN pgfv.effective_start_date AND pgfv.effective_end_date
ORDER BY POSITION_CODE,grade_code
To bulk upload item catalogs in HCM profiles, you can use ContentItem.dat. Each of the templates has certain mandatory attributes, like Context Name and Value Set Name or Value Set Id.
So, before you start preparing the file, you need to have the below information handy:
1. Context Name – This is a mandatory attribute. If you don't pass the value in your HDL file, you will get the below error:
The values 3000122xxxxx aren’t valid for ContentItemValueSetId.
You can get the context name from HRT_CONTENT_TYPES_B table:
2. Content Item Value Set Name/Id – This is again a mandatory attribute. You can get the Content Value Set Name/Id from the HRT_CONTENT_TP_VALUESETS_TL table.
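For instance, lookups along these lines can be used. This is a sketch only; the column names here are assumptions and should be verified against your environment:
SELECT content_type_id
,context_name
FROM hrt_content_types_b

SELECT content_tp_valueset_id
,value_set_name
FROM hrt_content_tp_valuesets_tl
WHERE language = 'US'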
Once you have the details, you can prepare ContentItem.dat.
Below are the sample files for different item catalog templates:
For Establishments:
METADATA|ContentItem|Name|ContextName|ContentItemValueSetName|ContentItemValueSetId|ContentItemCode|DateFrom|DateTo|RatingModelCode|SourceSystemId|SourceSystemOwner
MERGE|ContentItem|Indian Institute of Technology, Bombay|EDUCATIONAL_ESTABLISHMENT|Establishment||IIT_B|1951/01/01|||IIT_B|HRC_SQLLOADER
MERGE|ContentItem|Indian Institute of Management, Ahmedabad|EDUCATIONAL_ESTABLISHMENT|Establishment||IIM_A|1951/01/01|||IIM_A|HRC_SQLLOADER
For Licenses and Certifications:
METADATA|ContentItem|Name|ContextName|ContentItemValueSetName|ContentItemValueSetId|ContentItemCode|DateFrom|DateTo|RatingModelCode|SourceSystemId|SourceSystemOwner
MERGE|ContentItem|Oracle Global Human Resources 2023|CERTIFICATION|Licenses and Certifications||O_GHR_2023|1951/01/01|||O_GHR_2023|HRC_SQLLOADER
MERGE|ContentItem|Oracle Benefits 2023|CERTIFICATION|Licenses and Certifications||O_BEN_2023|1951/01/01|||O_BEN_2023|HRC_SQLLOADER
For Degrees:
METADATA|ContentItem|Name|ContextName|ContentItemValueSetName|ContentItemValueSetId|ContentItemCode|DateFrom|DateTo|RatingModelCode|SourceSystemId|SourceSystemOwner
MERGE|ContentItem|PhD|DEGREE|Degrees||XX_PhD|1951/01/01|||CI_XX_PhD|HRC_SQLLOADER
MERGE|ContentItem|Higher National Certificate|DEGREE|Degrees||XX_Higher National Certificate|1951/01/01|||CI_XX_Higher National Certificate|HRC_SQLLOADER
For Competencies:
METADATA|ContentItem|ContextName|ContentItemValueSetName|Name|ContentItemId|ContentItemCode|DateFrom|DateTo|ItemDescription|RatingModelId|RatingModelCode|CountryGeographyCode|CountryCountryCode|SourceSystemId|SourceSystemOwner
MERGE|ContentItem|COMPETENCY|Competencies|Accounting Standards and Principles||XX_ASAP|1951/01/01||To check knowledge on Accounting Standards and Principles.|5|PROFICIENCY|||XX_ASAP|HRC_SQLLOADER
MERGE|ContentItem|COMPETENCY|Competencies|Assessing Talent||XX_AT|1951/01/01||To check knowledge on Assessing Talent.|5|PROFICIENCY|||XX_AT|HRC_SQLLOADER
MERGE|ContentItem|COMPETENCY|Competencies|Assurance and Reporting||XX_AAR|1951/01/01||To check knowledge on Assurance and Reporting.|5|PROFICIENCY|||XX_AAR|HRC_SQLLOADER
Please note that Rating Model Id is mandatory for loading competencies. You can find the rating model id and rating model code in the HRT_RATING_MODELS_B table.
Once the data load is successful, you can run the below queries to extract the loaded data:
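For example, a sketch along the below lines, assuming the loaded items can be read back from HRT_CONTENT_ITEMS_B and HRT_CONTENT_ITEMS_TL (verify the table and column names in your environment):
SELECT cib.content_item_id
,cib.content_item_code
,cit.name
,cib.date_from
,cib.date_to
FROM hrt_content_items_b cib
,hrt_content_items_tl cit
WHERE cib.content_item_id = cit.content_item_id
AND cit.language = 'US'
AND cib.content_item_code IN ('XX_ASAP','XX_AT','XX_AAR')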
In this article, I will talk about the effect of using SET PURGE_FUTURE_CHANGES in an HCM Data Loader file. HCM uses date-tracked functionality for most objects, be it work structures, worker, or payroll related objects.
I have seen many scenarios where I had to go back to a past date and make an update to an existing record without touching the current record. There are numerous scenarios like these where HCM technical consultants are expected to use HDL to update past-dated records. Oracle has designed the PURGE_FUTURE_CHANGES SET command specifically for these scenarios.
But before adding this command to your HDL (.dat) file, it is very important to understand how it works; otherwise it may unintentionally purge or change data beyond recovery.
Let us start with an example. Below is the existing employment data for a worker with person number 120:
Later on, it was found that the grade should have been changed to Grade 6 from 16-Jun-2022 onwards, but there is currently no data for this in the database.
So, in this case, a new row with the grade updated to Grade 6 should be inserted, starting on 16-Jun-2022.
So, the technical consultant created an HDL file in the below format:
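(Reconstructed here as a sketch: it is the same WorkTerms/Assignment content as the corrected file shown further below, only without any SET command.)
METADATA|WorkTerms|AssignmentId|PeriodOfServiceId|EffectiveLatestChange|EffectiveSequence|EffectiveStartDate|EffectiveEndDate|SourceSystemId|SourceSystemOwner|ActionCode
MERGE|WorkTerms|300000066966135|300000066966134|Y|1|2022/06/16|4712/12/31|300000066966135|FUSION|ASG_CHANGE
METADATA|Assignment|AssignmentId|WorkTermsAssignmentId|EffectiveLatestChange|EffectiveSequence|EffectiveStartDate|EffectiveEndDate|SourceSystemId|SourceSystemOwner|ActionCode|GradeCode
MERGE|Assignment|300000066966140|300000066966135|Y|1|2022/06/16|4712/12/31|300000066966140|FUSION|ASG_CHANGE|GRADE6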
As you can see, EffectiveStartDate = 2022/06/16, so the intention here is to insert a new row starting 2022/06/16.
But unfortunately, running the above HDL will replace all the existing future rows, i.e. the row with an effective start date of 2023/01/01 will be purged from the database. Below is how the data will look once the above HDL load is done:
This happens because the default system setting for the update mode is REPLACE. This can be verified using the "Configure HCM Data Loader" task in Setup and Maintenance.
So, in order to preserve the future dated rows, you have to use the SET command:
SET PURGE_FUTURE_CHANGES N
METADATA|WorkTerms|AssignmentId|PeriodOfServiceId|EffectiveLatestChange|EffectiveSequence|EffectiveStartDate|EffectiveEndDate|SourceSystemId|SourceSystemOwner|ActionCode
MERGE|WorkTerms|300000066966135|300000066966134|Y|1|2022/06/16|4712/12/31|300000066966135|FUSION|ASG_CHANGE
METADATA|Assignment|AssignmentId|WorkTermsAssignmentId|EffectiveLatestChange|EffectiveSequence|EffectiveStartDate|EffectiveEndDate|SourceSystemId|SourceSystemOwner|ActionCode|GradeCode
MERGE|Assignment|300000066966140|300000066966135|Y|1|2022/06/16|4712/12/31|300000066966140|FUSION|ASG_CHANGE|GRADE6
Executing this will preserve the future dated rows, but it will overwrite the data in the future rows with the current row's data.
Now, if the requirement is to keep the future dated rows and their data intact, you should pass #RETAIN in the EffectiveEndDate attribute, as shown in the below example:
SET PURGE_FUTURE_CHANGES N
METADATA|WorkTerms|AssignmentId|PeriodOfServiceId|EffectiveLatestChange|EffectiveSequence|EffectiveStartDate|EffectiveEndDate|SourceSystemId|SourceSystemOwner|ActionCode
MERGE|WorkTerms|300000066966135|300000066966134|Y|1|2022/06/16|#RETAIN|300000066966135|FUSION|ASG_CHANGE
METADATA|Assignment|AssignmentId|WorkTermsAssignmentId|EffectiveLatestChange|EffectiveSequence|EffectiveStartDate|EffectiveEndDate|SourceSystemId|SourceSystemOwner|ActionCode|GradeCode
MERGE|Assignment|300000066966140|300000066966135|Y|1|2022/06/16|#RETAIN|300000066966140|FUSION|ASG_CHANGE|GRADE6
With #RETAIN, the value of Asg Attribute 1 is retained on the 01-Jan-2023 row.
Absence reasons can be loaded in bulk using AbsenceReason.dat.
BaseName is a mandatory attribute in the file. If BaseName is not passed, the user gets the below error:
An error occurred. To review details of the error run the HCM Data Loader Error Analysis Report diagnostic test. Message details: {MESSAGE}.
An error occurred. To review details of the error run the HCM Data Loader Error Analysis Report diagnostic test. Message details: Please check the stack trace for more details.
BaseName has to be provided in UPPER CASE, concatenated with an underscore (_) and the legislation code.
Example: TEST ABSENCE REASON_US
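A minimal sketch of such a file follows; the attribute list here is an assumption and should be confirmed against the AbsenceReason.dat business object specification:
METADATA|AbsenceReason|BaseName|Name|LegislationCode|Status|EffectiveStartDate|SourceSystemId|SourceSystemOwner
MERGE|AbsenceReason|TEST ABSENCE REASON_US|Test Absence Reason|US|A|2020/01/01|XX_TEST_ABS_REASON|HRC_SQLLOADER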
If the above format is not followed, HDL throws an error.
HCM Data Loader supports bulk uploading of agents for Oracle Recruiting Cloud. Currently, there is no HDL support for loading agencies; an idea has already been submitted for the same.
There are scenarios where we want to delete a future dated row from an object. Please note that for date-tracked objects like positions and locations, one can make use of HDL with the SET command to delete the future dated row.
Let us take an example where we have the below data on a position:
Position Name – Test Position
Effective Start Date – 01-Jan-2023 – record creation
Effective Start Date – 01-Oct-2023 – record update (let us say the standard working hours changed)
Now the requirement is to delete the row with the effective start date of 01-Oct-2023.
In such cases, the below HDL can be used:
SET PURGE_FUTURE_CHANGES Y
METADATA|Position|BusinessUnitName|PositionCode|EffectiveStartDate|EffectiveEndDate
MERGE|Position|BU Name|Pos Code|2023/01/01|4712/12/31
Recently, I faced a scenario for a customer where, after go-live, an issue was found with an absence plan's configuration. The absence plan was set up incorrectly: the Balance Frequency Source wasn't set to "Repeating Period", so the accrual was calculated incorrectly. To fix this, a new absence plan was created and the absence entries were made against the new plan.
So, the approach taken was to take a backup of all absence entries from PROD, enroll the employees into the new plan, and re-upload the absence entries.
A BIP report was developed to take a backup of the existing absence entries in HDL format. Below is the query for the same:
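The query below is a sketch of that report: the PersonAbsenceEntry attribute list, the 'XYZ Legal Employer' placeholder, and the absence table column names are assumptions to validate against the HDL specification and your configuration before use.
SELECT 'METADATA|PersonAbsenceEntry|PersonNumber|Employer|AbsenceType|StartDate|EndDate|AbsenceStatusCd|ApprovalStatusCd|SourceSystemId|SourceSystemOwner' datarow
,1 seq
FROM DUAL
UNION
SELECT 'MERGE|PersonAbsenceEntry|'||
papf.person_number||'|'||
'XYZ Legal Employer'||'|'||
aatt.name||'|'||
to_char(apae.start_datetime,'RRRR/MM/DD')||'|'||
to_char(apae.end_datetime,'RRRR/MM/DD')||'|'||
apae.absence_status_cd||'|'||
apae.approval_status_cd||'|'||
hikm.source_system_id||'|'||
hikm.source_system_owner datarow
,2 seq
FROM anc_per_abs_entries apae
,anc_absence_types_f_tl aatt
,per_all_people_f papf
,hrc_integration_key_map hikm
WHERE apae.absence_type_id = aatt.absence_type_id
AND aatt.language = 'US'
AND apae.start_datetime BETWEEN aatt.effective_start_date AND aatt.effective_end_date
AND apae.person_id = papf.person_id
AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
AND hikm.surrogate_id = apae.per_absence_entry_id
ORDER BY seq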
Once the data is extracted, you need to make sure that the Source System Owner is updated from FUSION to HRC_SQLLOADER. The Source System Owner is set to FUSION when an entry is created from the UI.
Once the output of the BIP is ready, change "MERGE" to "DELETE" to delete all the old absence entries. Then enroll the workers in the new plan, make the required changes in the BIP extract, and upload the data back into Fusion HCM.