select papf.person_number
      ,hdor.document_code
      ,hdor.document_name
      ,hdor.document_number
      ,fdt.file_name
      ,fdt.dm_version_number document_id
      ,fdt.dm_document_id ucm_file
from fnd_attached_documents fad
    ,hr_documents_of_record hdor
    ,fnd_documents_tl fdt
    ,per_all_people_f papf
    ,hr_document_types_tl hdtt
where fad.entity_name = 'HR_DOCUMENTS_OF_RECORD'
and hdor.documents_of_record_id = fad.pk1_value
and fad.document_id = fdt.document_id
and fdt.language = 'US'
and hdor.person_id = papf.person_id
and trunc(sysdate) between papf.effective_start_date and papf.effective_end_date
and hdor.document_type_id = hdtt.document_type_id
and hdtt.language = 'US'
and hdtt.document_type = 'Test Doc'
Post P2T, there is always a requirement to mask candidate email addresses in lower environments; masked email data can be generated in HDL format using the PersonEmail queries shown later in this post.
Sample HDL file (DocumentsOfRecord.dat) to load a document of record with an attachment:
METADATA|DocumentsOfRecord|PersonNumber|Country|DocumentType|DocumentCode|DateFrom|DateTo
MERGE|DocumentsOfRecord|78652|United Kingdom|UK Vehicle Information|OVERTIME_AVERAGING_AGREEMENT_2020-10-01|2020/10/01|2022/10/01
METADATA|DocumentAttachment|PersonNumber|Country|DocumentType|DocumentCode|DataTypeCode|Title|URLorTextorFileName|File
MERGE|DocumentAttachment|78652|United Kingdom|UK Vehicle Information|OVERTIME_AVERAGING_AGREEMENT_2020-10-01|FILE|DD test document of record for HDL.pdf|DD test document of record for HDL.pdf|DD test document of record for HDL.pdf
Please note that for the Country attribute you must pass the full country name. Passing a country code such as GB will throw the error below:
The values GB aren’t valid for the attribute LegislationCode.
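To find the full country name that HDL expects for a given ISO code, you can look it up in the territories table. A minimal sketch, assuming the FND_TERRITORIES_TL table and its TERRITORY_SHORT_NAME column (verify the names in your environment):
SELECT territory_code
      ,territory_short_name
FROM   fnd_territories_tl
WHERE  language = 'US'
AND    territory_code = 'GB'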
Use the below SQL to extract assignment rules defined at course level:
SELECT wlift.NAME
,war.DEFAULT_INITIAL_REC_SUBSTA
,war.DEFAULT_INITIAL_ACT_SCORE
,war.DEFAULT_INITIAL_COMMENTS
,war.DEFAULT_INITIAL_COMPL_DATE
,war.DEFAULT_INITIAL_TOTACT_EFF
,war.DEFAULT_INITIAL_REASON_CODE
,war.INITIAL_DUE_DATE_OPTION
,war.INITIAL_DUE_DATE
,war.INITIAL_DUE_IN_DAYS
,war.VALIDITY_OPTION
,war.EXPIRY_OPTION
,war.EXPIRY_IN_DAYS
,war.EXPIRY_DATE
,war.EXPIRY_IN_NUM_YRS
,war.RENEWAL_OPTIONS
,war.RENEWAL_BEFORE_EXPIRY_DAYS
FROM WLF_ASSIGNMENT_RULES war
,WLF_LEARNING_ITEMS_F_TL wlift
,WLF_LEARNING_ITEMS_F wlif
WHERE war.ASSIGNMENT_RULE_ID = wlif.ASSIGNMENT_RULE_ID
AND wlif.LEARNING_ITEM_ID = wlift.LEARNING_ITEM_ID
AND wlift.language = 'US'
AND TRUNC(sysdate) BETWEEN wlif.effective_start_date AND wlif.effective_end_date
AND TRUNC(sysdate) BETWEEN wlift.effective_start_date AND wlift.effective_end_date
Schedule assignment data can be extracted from Fusion HCM in HDL format using a simple BIP query, for example to generate a DELETE file:
SELECT DATA_ROW
FROM (
SELECT 'METADATA|ScheduleAssignment|ResourceType|ScheduleAssignmentId|SourceSystemId|SourceSystemOwner' AS DATA_ROW
FROM DUAL
UNION all
select 'DELETE'||'|'||
       'ScheduleAssignment'||'|'||
       psa.resource_type||'|'||
       psa.schedule_assignment_id||'|'||
       hikm.source_system_id||'|'||
       hikm.source_system_owner AS DATA_ROW
from fusion.per_schedule_assignments psa
    ,hrc_integration_key_map hikm
where psa.schedule_assignment_id = hikm.surrogate_id
)
Use the SQL query below to extract email address data in HDL format for a DELETE load:
SELECT DATA_ROW
FROM (
SELECT 'METADATA|PersonEmail|EmailAddressId|PersonId|DateFrom|DateTo|EmailType|EmailAddress|SourceSystemId|SourceSystemOwner' AS DATA_ROW
FROM DUAL
UNION all
select 'DELETE'||'|'||
'PersonEmail' ||'|'||
pea.email_address_id||'|'||
pea.person_id ||'|'||
to_char(pea.date_from,'RRRR/MM/DD')||'|'||
to_char(pea.date_to,'RRRR/MM/DD') ||'|'||
pea.email_type ||'|'||
pea.email_address ||'|'||
(select email_hrc.source_system_id
from hrc_integration_key_map email_hrc
WHERE pea.email_Address_id = email_hrc.surrogate_id) ||'|'||
(select email_hrc.source_system_owner
from hrc_integration_key_map email_hrc
WHERE pea.email_Address_id = email_hrc.surrogate_id) AS DATA_ROW
from per_email_addresses pea
)
If you only want to update the email addresses rather than delete them, the same pattern can be used with MERGE instead of DELETE.
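For example, when masking addresses after a P2T refresh, a MERGE file can be generated in much the same way. The sketch below is a minimal example; the dummy address derived from the email address ID and the direct join to HRC_INTEGRATION_KEY_MAP are illustrative assumptions, not taken from the original post:
SELECT DATA_ROW
FROM (
SELECT 'METADATA|PersonEmail|EmailAddressId|PersonId|DateFrom|DateTo|EmailType|EmailAddress|SourceSystemId|SourceSystemOwner' AS DATA_ROW
FROM DUAL
UNION ALL
select 'MERGE'||'|'||
       'PersonEmail'||'|'||
       pea.email_address_id||'|'||
       pea.person_id||'|'||
       to_char(pea.date_from,'RRRR/MM/DD')||'|'||
       to_char(pea.date_to,'RRRR/MM/DD')||'|'||
       pea.email_type||'|'||
       -- illustrative masked address: replace with your own masking rule
       'masked.'||pea.email_address_id||'@example.com'||'|'||
       hikm.source_system_id||'|'||
       hikm.source_system_owner AS DATA_ROW
from per_email_addresses pea
    ,hrc_integration_key_map hikm
where pea.email_address_id = hikm.surrogate_id
)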
Oracle has released a new subject area for Areas of Responsibility.
Subject Area Name – Workforce Management – Areas of Responsibility
This has simplified the process of extracting AoR data using OTBI.
To get any data in this subject area, you must run the “Refresh Representative Data” ESS process. The process should be rerun after any changes to AoRs.
To see data in this subject area, the relevant duty roles must also be assigned.
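If you want to cross-check the subject area output against the underlying data, the AoR records can also be queried directly from the base table. A minimal sketch, assuming the PER_ASG_RESPONSIBILITIES table and its RESPONSIBILITY_TYPE column (verify the table and column names in your environment):
SELECT papf.person_number
      ,par.responsibility_type
      ,par.start_date
      ,par.end_date
FROM   per_asg_responsibilities par
      ,per_all_people_f papf
WHERE  par.person_id = papf.person_id
AND    TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date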
Oracle has provided a new feature where working hours for each day can be stored against an assignment.
The data is stored in the PER_WORKING_HOUR_PATTERNS_F table.
Use the query below to extract the data:
SELECT papf.person_number, pwhpf.*
FROM PER_WORKING_HOUR_PATTERNS_F pwhpf
,PER_ALL_ASSIGNMENTS_M paam
,PER_ALL_PEOPLE_F papf
WHERE pwhpf.OBJECT_ID = paam.ASSIGNMENT_ID
AND paam.PERSON_ID = papf.PERSON_ID
AND papf.person_number = '10011'
AND paam.assignment_type = 'E'
AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
AND TRUNC(SYSDATE) BETWEEN paam.effective_start_date AND paam.effective_end_date
AND paam.effective_start_date BETWEEN pwhpf.effective_start_date AND pwhpf.effective_end_date
Query to extract data in HDL format:
SELECT
'METADATA|WorkingHourPattern|WorkingHourPatternId|EffectiveStartDate|EffectiveEndDate|Object|ObjectId|AssignmentNumber|PersonId|ReplaceFirstEffectiveStartDate' AS DATA_ROW, 1 ORDERBY FROM DUAL
UNION
SELECT 'MERGE|WorkingHourPattern|'
|| pwhpf.WORKING_HOUR_PATTERN_ID
|| '|'
|| TO_CHAR(pwhpf.EFFECTIVE_START_DATE,'YYYY/MM/DD')
|| '|'
|| TO_CHAR(pwhpf.EFFECTIVE_END_DATE,'YYYY/MM/DD')
|| '|ASSIGNMENT|'
|| pwhpf.object_id
|| '|'
|| paam.assignment_number
|| '|'
|| paam.person_id
|| '|'
|| 'Y'
as DATA_ROW, 2 ORDERBY
FROM PER_WORKING_HOUR_PATTERNS_F pwhpf
,PER_ALL_ASSIGNMENTS_M paam
WHERE paam.assignment_id = pwhpf.object_id
AND paam.assignment_type = 'E'
AND TRUNC(SYSDATE) BETWEEN paam.effective_start_date AND paam.effective_end_date
AND paam.effective_start_date BETWEEN pwhpf.effective_start_date AND pwhpf.effective_end_date
ORDER BY ORDERBY;
Oracle provides a process, Purge Person Data in Test Environments, to remove Person/Contact/Worker records from a test environment of Oracle HCM. Please note that HDL for the core Worker object doesn’t support DELETE, so there is no other way to purge Person/Contact/Worker information from Oracle HCM once a record has been created. Purge Person Data in Test Environments is therefore a very useful utility for purging person-related data from a test environment.
This process can’t be run in Oracle HCM Production.
Before 21D, the process required an additional step to obtain a key to enable the process in test environments, but from 21D onwards it is enabled by default in all test environments.
You can find more details about this process in the Oracle documentation.
Click on Schedule New Process –> search for Purge Person Data in Test Environments
Supply the parameters
Sample Person SQLs:
To remove only Contact Person records:
SELECT DISTINCT papf.person_id FROM per_all_people_f papf WHERE 1=1 AND NOT EXISTS (SELECT 1 FROM per_periods_of_service ppos where ppos.person_id = papf.person_id)
To remove all person records:
SELECT DISTINCT papf.person_id FROM per_all_people_f papf
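As a further example (a hedged sketch, not from the original post), the SQL below restricts the purge to people whose work relationships have all been terminated; validate the ACTUAL_TERMINATION_DATE filter against your data before using it:
SELECT DISTINCT papf.person_id
FROM per_all_people_f papf
WHERE EXISTS (SELECT 1
              FROM per_periods_of_service ppos
              WHERE ppos.person_id = papf.person_id)
AND NOT EXISTS (SELECT 1
                FROM per_periods_of_service ppos
                WHERE ppos.person_id = papf.person_id
                AND ppos.actual_termination_date IS NULL)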
Use the SQL below to monitor the number of rows remaining in each table:
SELECT a.*
FROM
(
SELECT 'ANC_PER_ABS_ENTRIES' table_name, count(*) rowcount FROM ANC_PER_ABS_ENTRIES
UNION
SELECT 'ANC_PER_ABS_ENTRY_DTLS' table_name, count(*) rowcount FROM ANC_PER_ABS_ENTRY_DTLS
UNION
SELECT 'ANC_PER_ABS_MATERNITY' table_name, count(*) rowcount FROM ANC_PER_ABS_MATERNITY
UNION
SELECT 'ANC_PER_PLAN_ENROLLMENT' table_name, count(*) rowcount FROM ANC_PER_PLAN_ENROLLMENT
UNION
SELECT 'ANC_PER_ABS_PLAN_ENTRIES' table_name, count(*) rowcount FROM ANC_PER_ABS_PLAN_ENTRIES
UNION
SELECT 'ANC_PER_ACCRUAL_ENTRIES' table_name, count(*) rowcount FROM ANC_PER_ACCRUAL_ENTRIES
UNION
SELECT 'ANC_PER_ACRL_ENTRY_DTLS' table_name, count(*) rowcount FROM ANC_PER_ACRL_ENTRY_DTLS
UNION
SELECT 'CMP_SALARY' table_name, count(*) rowcount FROM CMP_SALARY
UNION
SELECT 'PAY_ASSIGNED_PAYROLLS_DN' table_name, count(*) rowcount FROM PAY_ASSIGNED_PAYROLLS_DN
UNION
SELECT 'PAY_ASSIGNED_PAYROLLS_F' table_name, count(*) rowcount FROM PAY_ASSIGNED_PAYROLLS_F
UNION
SELECT 'PER_ADDRESSES_F' table_name, count(*) rowcount FROM PER_ADDRESSES_F
UNION
SELECT 'PER_ALL_ASSIGNMENTS_M' table_name, count(*) rowcount FROM PER_ALL_ASSIGNMENTS_M
UNION
SELECT 'PER_ALL_PEOPLE_F' table_name, count(*) rowcount FROM PER_ALL_PEOPLE_F
UNION
SELECT 'PER_ASSIGN_WORK_MEASURES_F' table_name, count(*) rowcount FROM PER_ASSIGN_WORK_MEASURES_F
UNION
SELECT 'PER_CITIZENSHIPS' table_name, count(*) rowcount FROM PER_CITIZENSHIPS
UNION
SELECT 'PER_DRIVERS_LICENSES' table_name, count(*) rowcount FROM PER_DRIVERS_LICENSES
UNION
SELECT 'PER_EMAIL_ADDRESSES' table_name, count(*) rowcount FROM PER_EMAIL_ADDRESSES
UNION
SELECT 'PER_ETHNICITIES' table_name, count(*) rowcount FROM PER_ETHNICITIES
UNION
SELECT 'PER_NATIONAL_IDENTIFIERS' table_name, count(*) rowcount FROM PER_NATIONAL_IDENTIFIERS
UNION
SELECT 'PER_PASSPORTS' table_name, count(*) rowcount FROM PER_PASSPORTS
UNION
SELECT 'PER_PEOPLE_LEGISLATIVE_F' table_name, count(*) rowcount FROM PER_PEOPLE_LEGISLATIVE_F
UNION
SELECT 'PER_PERIODS_OF_SERVICE' table_name, count(*) rowcount FROM PER_PERIODS_OF_SERVICE
UNION
SELECT 'PER_PERSON_ADDR_USAGES_F' table_name, count(*) rowcount FROM PER_PERSON_ADDR_USAGES_F
UNION
SELECT 'PER_PERSON_NAMES_F' table_name, count(*) rowcount FROM PER_PERSON_NAMES_F
UNION
SELECT 'PER_PERSON_TYPE_USAGES_M' table_name, count(*) rowcount FROM PER_PERSON_TYPE_USAGES_M
UNION
SELECT 'PER_PERSONS' table_name, count(*) rowcount FROM PER_PERSONS
UNION
SELECT 'PER_PHONES' table_name, count(*) rowcount FROM PER_PHONES
UNION
SELECT 'PER_RELIGIONS' table_name, count(*) rowcount FROM PER_RELIGIONS
UNION
SELECT 'PER_VISAS_PERMITS_F' table_name, count(*) rowcount FROM PER_VISAS_PERMITS_F
UNION
SELECT 'PAY_ELEMENT_ENTRIES_F' table_name, count(*) rowcount FROM PAY_ELEMENT_ENTRIES_F
UNION
SELECT 'PAY_ELEMENT_ENTRY_VALUES_F' table_name, count(*) rowcount FROM PAY_ELEMENT_ENTRY_VALUES_F
UNION
SELECT 'PER_WORKING_HOUR_PATTERNS_F' table_name, count(*) rowcount FROM PER_WORKING_HOUR_PATTERNS_F
UNION
SELECT 'PER_ASSIGNMENT_EXTRA_INFO_M' table_name, count(*) rowcount FROM PER_ASSIGNMENT_EXTRA_INFO_M
UNION
SELECT 'PER_ASSIGNMENT_SUPERVISORS_F' table_name, count(*) rowcount FROM PER_ASSIGNMENT_SUPERVISORS_F
UNION
SELECT 'PER_PEOPLE_EXTRA_INFO_F' table_name, count(*) rowcount FROM PER_PEOPLE_EXTRA_INFO_F
UNION
SELECT 'HR_DOCUMENTS_OF_RECORD' table_name, count(*) rowcount FROM HR_DOCUMENTS_OF_RECORD
UNION
SELECT 'PER_CONTACT_RELATIONSHIPS' table_name, count(*) rowcount FROM PER_CONTACT_RELATIONSHIPS
) a
where
a.rowcount <> 0
You can add or remove tables based on the data in your environment.
Stats:
Normally the process takes around 5-6 hours for 16-17K employees. Performance also depends on environment sizing, among other factors.
Use the query below to extract disability organization details from Fusion HCM:
SELECT houft.name disability_org_name
,TO_CHAR(haouf.effective_start_date,'RRRR/MM/DD') disability_org_start_date
,TO_CHAR(haouf.effective_end_date,'RRRR/MM/DD') disability_org_end_date
,houcf.classification_code
,haouf.internal_address_line
,houcf.status
FROM hr_org_unit_classifications_f houcf
, hr_all_organization_units_f haouf
, hr_organization_units_f_tl houft
WHERE houcf.organization_id = haouf.organization_id
AND houft.organization_id = haouf.organization_id
AND TRUNC(SYSDATE) BETWEEN houcf.effective_start_date AND houcf.effective_end_date
AND TRUNC(SYSDATE) BETWEEN haouf.effective_start_date AND haouf.effective_end_date
AND haouf.effective_start_date BETWEEN houft.effective_start_date AND houft.effective_end_date
AND houft.language = USERENV('LANG')
AND houcf.category_code = 'HCM_DISABILITY_ORGANIZATION'
Sometimes for data migration it is necessary to get a dump of all the absence reasons configured in Fusion HCM in order to validate the data being loaded.
The query below can be used to extract the absence reason setup data:
SELECT aarft.name absence_reason_name
,to_char(aarf.effective_start_date,'YYYY/MM/DD') reason_start_date
,to_char(aarf.effective_end_date,'YYYY/MM/DD') reason_end_date
,aarf.legislation_code legislation_code
,aarf.status reason_status
,aarft.description reason_description
,aarf.base_name reason_base_name
FROM ANC_ABSENCE_REASONS_F aarf
,ANC_ABSENCE_REASONS_F_TL aarft
WHERE aarf.absence_reason_id=aarft.absence_reason_id
AND aarf.effective_start_date BETWEEN aarft.effective_start_date AND aarft.effective_end_date
AND aarft.language=userenv('LANG')
There is a common requirement to use value sets in various DFF attributes across the Fusion HCM application. Oracle HCM supports different kinds of value sets – Independent, Table validated, etc.
One use case is where a user wants to see a list of values containing the numbers from 1 to n. There are two approaches to creating such a value set: define an Independent value set with n distinct values, or define a table-validated value set.
The Independent value set approach works fine if n is small. But if n is a large integer, for example n = 60, it is not pragmatic to maintain an independent value set.
See the example below, which defines a table-validated value set that displays values ranging from 1 to 60:
FROM Clause
(SELECT TO_NUMBER(LEVEL) value FROM DUAL CONNECT BY LEVEL <= 60) a
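To sanity-check the definition before attaching it to a DFF segment, you can run the same FROM clause from a BIP data model and confirm it returns the expected 60 values (a quick illustrative check, not part of the value set setup itself):
SELECT a.value
FROM (SELECT TO_NUMBER(LEVEL) value FROM DUAL CONNECT BY LEVEL <= 60) a
ORDER BY a.value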
When loading Candidate data using HDL, the values of the Source attributes (Source and Source Medium) are required. These values are configured in the UI using the task ‘Manage Candidate Dimension Source Names’.
Use the SQL below to get the values:
SELECT DISTINCT
source_medium_url_value,
source_url_value
FROM
fusion.IRC_DIMENSION_DEF_VL
Use the query below to extract grade step and grade rate details for employee assignments:
select * from (
SELECT DISTINCT
papf.person_number,
paam.assignment_number,
pgf.name grade_name,
pgsf.name grade_step_name,
prv.value rate_value
FROM
PER_GRADES_F_tl PGF,
PER_GRADES_F PG,
PER_ALL_ASSIGNMENTS_M PAAM,
PER_ALL_PEOPLE_F PAPF,
PER_ASSIGN_GRADE_STEPS_F pagsf,
PER_GRADE_STEPS_F_TL PGSF ,
PER_RATE_VALUES_F prv
WHERE
1=1
AND PG.grade_id = PGF.grade_id
AND PAAM.grade_id = PGF.grade_id
AND paam.person_id = papf.person_id
and paam.assignment_type = 'E'
and PGF.language = 'US'
AND TRUNC(SYSDATE) BETWEEN PAPF.effective_start_date AND PAPF.effective_end_date
AND TRUNC(SYSDATE) BETWEEN PAAM.effective_start_date AND PAAM.effective_end_date
AND TRUNC(SYSDATE) BETWEEN PGF.effective_start_date AND PGF.effective_end_date
AND TRUNC(SYSDATE) BETWEEN PG.effective_start_date AND PG.effective_end_date
AND pagsf.assignment_id (+) = paam.assignment_id
and pagsf.grade_step_id = PGSF.grade_step_id (+)
and paam.effective_start_date between pagsf.effective_start_date (+) and pagsf.effective_end_date (+)
and paam.effective_start_date between PGSF.effective_start_date (+) and PGSF.effective_end_date (+)
and PGSF.language (+) = 'US'
and PGSF.grade_step_id = PRV.rate_object_id (+)
and paam.effective_start_date between prv.effective_start_date (+) and prv.effective_end_date (+)
) a
where grade_step_name is not null
There are many scenarios where a table-validated value set is created and then called from a fast formula via the GET_VALUE_SET function with a parameter.
For developers, it is a headache to test the value returned by the value set, as they have to write logging statements in the fast formula to debug it.
One easy way to do this is to call the pay_ff_functions.gvs function from BIP.
Sample code:
SELECT pay_ff_functions.gvs ('EMP_NATIONALITY','|=P_PERSON_ID='''||'1123121') VALUE FROM DUAL
There are cases where you need to run HCM Extracts on a schedule that isn’t available among the standard schedules. For example, you may need to run an HCM Extract every 6 hours. In such cases, a custom flow schedule fast formula can be created.
Sample fast formula to run HCM Extracts at 4 AM, 10 AM, 4 PM and 10 PM: