HDL – Delete Person Email addresses

Use the SQL query below to extract email address data in HDL format so the data can be deleted:

SELECT DATA_ROW
  FROM (
SELECT 'METADATA|PersonEmail|EmailAddressId|PersonId|DateFrom|DateTo|EmailType|EmailAddress|SourceSystemId|SourceSystemOwner' AS DATA_ROW
  FROM DUAL
UNION ALL
SELECT 'DELETE' ||'|'||
       'PersonEmail' ||'|'||
       pea.email_address_id ||'|'||
       pea.person_id ||'|'||
       TO_CHAR(pea.date_from,'RRRR/MM/DD') ||'|'||
       TO_CHAR(pea.date_to,'RRRR/MM/DD') ||'|'||
       pea.email_type ||'|'||
       pea.email_address ||'|'||
       (SELECT email_hrc.source_system_id
          FROM hrc_integration_key_map email_hrc
         WHERE pea.email_address_id = email_hrc.surrogate_id) ||'|'||
       (SELECT email_hrc.source_system_owner
          FROM hrc_integration_key_map email_hrc
         WHERE pea.email_address_id = email_hrc.surrogate_id) AS DATA_ROW
  FROM per_email_addresses pea
)
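
If the delete file should only cover a specific set of workers, it can help to preview which rows will be picked up before generating the file. A minimal standalone sketch, assuming a join to PER_ALL_PEOPLE_F on person_id (the person numbers are placeholders):

SELECT papf.person_number
      ,pea.email_type
      ,pea.email_address
  FROM per_email_addresses pea
      ,per_all_people_f papf
 WHERE pea.person_id = papf.person_id
   AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
   AND papf.person_number IN ('11111','22222')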

If you only want to update the email addresses rather than delete them, refer to the post below:

HDL – Email Data Obfuscation in Test Environment

HDL – Delete Element Entries Data

Query to extract element entry data in HDL format:

SELECT DATA_ROW
FROM (
SELECT 'METADATA'
|| CHR (124) || 'ElementEntry'
|| CHR (124) || 'ElementEntryId'
|| CHR (124) || 'EffectiveStartDate'
|| CHR (124) || 'SourceSystemId' 
|| CHR (124) || 'SourceSystemOwner'  AS DATA_ROW
FROM DUAL
UNION ALL
SELECT 'DELETE'
|| CHR (124) || 'ElementEntry'
|| CHR (124) || peef.element_entry_id
|| CHR (124) || to_char(peef.effective_start_date,'RRRR/MM/DD')
|| CHR (124) || hikm.source_system_id
|| CHR (124) || hikm.source_system_owner
FROM PAY_ELEMENT_ENTRIES_F peef,
     HRC_INTEGRATION_KEY_MAP hikm
WHERE hikm.surrogate_id = peef.element_entry_id
)
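
Similarly, to check which entries would be picked up for a particular element before generating the delete file, a standalone preview can be run. A minimal sketch, assuming a join to PAY_ELEMENT_TYPES_TL on element_type_id (the element name is a placeholder):

SELECT pett.element_name
      ,peef.element_entry_id
      ,TO_CHAR(peef.effective_start_date,'RRRR/MM/DD') entry_start_date
  FROM pay_element_entries_f peef
      ,pay_element_types_tl pett
 WHERE peef.element_type_id = pett.element_type_id
   AND pett.language = 'US'
   AND pett.element_name = 'Sample Element'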
OTBI – Areas of Responsibility Subject area

Oracle has released a new subject area for Areas of Responsibility.

Subject Area Name – Workforce Management – Areas of Responsibility

This has simplified the process of extracting AoR data using OTBI.

To see any data in this subject area, you must run the “Refresh Representative Data” ESS process. The process should be re-run after any changes are made to AoRs.

To get data in this subject area, the following duty roles must be assigned:

FBI_AREAS_OF_RESPONSIBILITY_TRANSACTION_ANALYSIS_DUTY 
ORA_FBI_AREAS_OF_RESPONSIBILITY_TRANSACTION_ANALYSIS_DUTY_HCM

Please note that these duty roles should be added to the job role, not the data role.
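
If you want to sanity-check the subject area output from the database side, the underlying AoR rows can also be queried with BIP. A minimal sketch, assuming the PER_ASG_RESPONSIBILITIES base table with PERSON_ID, RESPONSIBILITY_TYPE, START_DATE and END_DATE columns:

SELECT papf.person_number
      ,par.responsibility_type
      ,TO_CHAR(par.start_date,'YYYY/MM/DD') aor_start_date
      ,TO_CHAR(par.end_date,'YYYY/MM/DD') aor_end_date
  FROM per_asg_responsibilities par   -- assumed base table for AoR records
      ,per_all_people_f papf
 WHERE par.person_id = papf.person_id
   AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date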

BIP – Queries related to Bank, Bank Branches and External Bank accounts

Query for bank details:

select * from ce_banks_v

Query for bank branch details:

select * from ce_bank_branches_v
select cbbv.bank_name
      ,cbbv.bank_branch_name
      ,cbbv.bank_home_country
      ,cbbv.branch_number 
      ,TO_CHAR(cbbv.start_date,'YYYY/MM/DD') branch_start_date
      ,TO_CHAR(cbbv.end_date,'YYYY/MM/DD') branch_end_date
  from ce_bank_branches_v cbbv

Query to extract Internal bank accounts:

select * from ce_bank_accounts

Query to extract External bank accounts:

select * from iby_ext_bank_accounts
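
To tie external bank accounts back to workers, the account owner table can be joined in. A minimal sketch, assuming IBY_ACCOUNT_OWNERS links the account to the owner's party id and that PER_ALL_PEOPLE_F.PARTY_ID resolves that party to a person:

SELECT papf.person_number
      ,ieba.ext_bank_account_id
      ,ieba.bank_account_num
  FROM iby_ext_bank_accounts ieba
      ,iby_account_owners iao          -- assumed link table: account to owner party
      ,per_all_people_f papf
 WHERE ieba.ext_bank_account_id = iao.ext_bank_account_id
   AND iao.account_owner_party_id = papf.party_id
   AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date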
BIP – Query to extract absence reasons from Oracle HCM

Sometimes, for data migration, it is necessary to take a dump of all the absence reasons configured in Fusion HCM in order to validate the data being loaded.

The query below can be used to extract the setup data for absence reasons:

SELECT aarft.name absence_reason_name
      ,to_char(aarf.effective_start_date,'YYYY/MM/DD') reason_start_date
      ,to_char(aarf.effective_end_date,'YYYY/MM/DD') reason_end_date
      ,aarf.legislation_code legislation_code
      ,aarf.status reason_status
      ,aarft.description reason_description
      ,aarf.base_name reason_base_name
  FROM ANC_ABSENCE_REASONS_F aarf
      ,ANC_ABSENCE_REASONS_F_TL aarft
 WHERE aarf.absence_reason_id=aarft.absence_reason_id
   AND aarf.effective_start_date BETWEEN aarft.effective_start_date AND aarft.effective_end_date
   AND aarft.language=userenv('LANG')
BIP – Query to extract Worker Grade, Grade Step and Rate value
select * from (
SELECT DISTINCT
papf.person_number,
paam.assignment_number,
pgf.name grade_name,
pgsf.name grade_step_name,
prv.value rate_value
FROM
PER_GRADES_F_tl PGF,
PER_GRADES_F PG,
PER_ALL_ASSIGNMENTS_M PAAM,
PER_ALL_PEOPLE_F PAPF,
PER_ASSIGN_GRADE_STEPS_F pagsf,
PER_GRADE_STEPS_F_TL PGSF ,
PER_RATE_VALUES_F prv
WHERE 
1=1
AND PG.grade_id = PGF.grade_id
AND PAAM.grade_id = PGF.grade_id
AND paam.person_id = papf.person_id
and paam.assignment_type = 'E'
and PGF.language = 'US'
AND TRUNC(SYSDATE) BETWEEN PAPF.effective_start_date AND PAPF.effective_end_date
AND TRUNC(SYSDATE) BETWEEN PAAM.effective_start_date AND PAAM.effective_end_date
AND TRUNC(SYSDATE) BETWEEN PGF.effective_start_date AND PGF.effective_end_date
AND TRUNC(SYSDATE) BETWEEN PG.effective_start_date AND PG.effective_end_date
AND pagsf.assignment_id (+) = paam.assignment_id
    and pagsf.grade_step_id = PGSF.grade_step_id (+)
	and paam.effective_start_date between pagsf.effective_start_date (+) and pagsf.effective_end_date (+)
	and paam.effective_start_date between PGSF.effective_start_date (+) and PGSF.effective_end_date (+)
	and PGSF.language (+) = 'US'
	and PGSF.grade_step_id = PRV.rate_object_id (+)
) a 
where grade_step_name is not null
BIP – Restricting succession plans access to logged in person in BIP reports

Succession plan data is stored in the HRM_PLANS table. A succession plan can be either a PUBLIC or a PRIVATE plan. By default, any user who queries the HRM_PLANS table can see data for all plans, irrespective of plan type (public or private). The requirement here is that the person querying the data should see all PUBLIC succession plans but only those PRIVATE plans that they own.

This can be achieved using the SQL below:

select * from HRM_PLANS hp
 where hp.ACCESS_TYPE_CODE = 'PUBLIC'
UNION
select * from HRM_PLANS hp
 where hp.ACCESS_TYPE_CODE = 'PRIVATE'
   and exists (select 1 from HRM_PLAN_OWNERS hpo
                where hpo.person_id = hrc_session_util.get_user_personid
                  and hpo.plan_id = hp.plan_id)
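
The same access rule can also be written as a single statement instead of a UNION, for example:

select *
  from HRM_PLANS hp
 where hp.ACCESS_TYPE_CODE = 'PUBLIC'
    or (hp.ACCESS_TYPE_CODE = 'PRIVATE'
        and exists (select 1 from HRM_PLAN_OWNERS hpo
                     where hpo.person_id = hrc_session_util.get_user_personid
                       and hpo.plan_id = hp.plan_id))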
BIP – Query to extract Collective Agreement Details
SELECT pcafv.collective_agreement_name
      ,TO_CHAR(pcafv.effective_start_date,'YYYY/MM/DD') Start_Date
      ,pcafv.legislation_code
      ,pcafv.status
      ,pcafv.identification_code
      ,pcafv.description
      ,pcafv.comments
      ,houf_union.name Union_Name
      ,pcafv.bargaining_unit_code
      ,ple.name Legal_Employer
      ,pcafv.employee_org_name
      ,pcafv.employee_org_contact
      ,pcafv.employer_org_name
      ,pcafv.employer_org_contact
  FROM per_col_agreements_f_vl pcafv
      ,hr_all_organization_units_f_vl houf_union
      ,per_legal_employers ple
 WHERE pcafv.legal_entity_id = ple.organization_id(+)
   AND pcafv.union_id = houf_union.organization_id(+)
   AND trunc(SYSDATE) BETWEEN pcafv.effective_start_date AND pcafv.effective_end_date
   AND trunc(SYSDATE) BETWEEN houf_union.effective_start_date(+) AND houf_union.effective_end_date(+)
   AND trunc(SYSDATE) BETWEEN ple.effective_start_date(+) AND ple.effective_end_date(+)

BIP – Query to extract Calendar Event details

Calendar events are used in HCM to capture public holidays available to employees based on their location or department.

The SQL queries below can be used to extract the list of configured calendar events from the back-end tables:

List of all calendar events configured:

SELECT pcetl.name
      ,pce.short_code
      ,pcetl.description
      ,pce.category
      ,pce.coverage_type 
      ,pce.start_date_time
      ,pce.end_date_time 
      ,pce.tree_code
      ,pce.tree_structure_code 
 FROM per_calendar_events pce,
      per_calendar_events_tl pcetl
WHERE pce.calendar_event_id=pcetl.calendar_event_id
  AND pcetl.LANGUAGE=userenv('LANG')

List of calendar events by location:

SELECT hla.location_id
      ,hla.location_name             
      ,hla.internal_location_code 
      ,pcet.name
	  ,pce.short_code
      ,pcet.description
      ,pce.category
      ,pce.coverage_type	  
	  ,pce.start_date_time
	  ,pce.end_date_time
      ,pce.tree_code
      ,pce.tree_structure_code 	  
 FROM hr_locations_all hla
     ,per_calendar_events pce
	 ,per_calendar_events_tl pcet
	 ,per_cal_event_coverage pcec
	 ,per_geo_tree_node_rf pgtnr
WHERE hla.geo_hierarchy_node_value = pgtnr.ancestor_pk1_value
  AND pgtnr.tree_structure_code = 'PER_GEO_TREE_STRUCTURE'
  AND pgtnr.distance in (0)
  AND pgtnr.ancestor_tree_node_id = pcec.tree_node_id
  AND pcec.calendar_event_id = pcet.calendar_event_id
  AND pcet.calendar_event_id = pce.calendar_event_id
  AND pcet.language = userenv('lang')
  AND pcec.coverage_flag in ('I')
  AND trunc(sysdate) between hla.effective_start_date and hla.effective_end_date
HDL – Sample HDL to load HCM Group Members

Oracle provides the HCMGroup.dat business object, which can be used to assign members to an HCM Group. However, the business object has a major limitation (up to release 22A): it can be used only for the following two seeded Oracle HCM Groups:

  • Manually Maintained Excluded Members
  • Manually Maintained Audit Members

Below is a sample HDL file to add members to the Manually Maintained Audit Members HCM Group:

METADATA|HCMGroup|GrpMemberId|EffectiveStartDate|GrpId|MemberId|GroupCode|PersonNumber|SourceSystemOwner|SourceSystemId|GUID|EffectiveEndDate
MERGE|HCMGroup||2022/01/04|||MANUALLY MAINTAINED AUDIT MEMBERS|1014|HRC_SQLLOADER|MANUALLY MAINTAINED AUDIT MEMBERS_1014||

This cannot be used for custom HCM Groups. If you try to use a custom group code in the HDL file, you will receive the error below:

Valid values for attribute GROUP CODE: MANUALLY MAINTAINED AUDIT MEMBERS,MANUALLY MAINTAINED EXCLUDED MEMBERS.

Query to extract members of an HCM Group:

SELECT hgt.group_name
      ,papf.person_id
      ,papf.person_number
      ,ppnf.full_name
      ,ppnf.last_name
      ,ppnf.first_name
      ,hgmf.effective_start_date
      ,hgmf.effective_end_date
  FROM PER_ALL_PEOPLE_F papf,
       PER_PERSON_NAMES_F ppnf,
       HWM_GRP_MEMBERS_F hgmf,
       HWM_GRPS_B hgb,
       HWM_GRPS_TL hgt
 WHERE hgb.grp_id = hgmf.grp_id
   AND hgb.grp_id = hgt.grp_id 
   AND hgt.language = 'US'
   AND papf.person_id = hgmf.member_id
   AND papf.person_id = ppnf.person_id
   AND ppnf.name_type ='GLOBAL'
   AND trunc(sysdate) BETWEEN hgmf.effective_start_date and hgmf.effective_end_date
   AND trunc(sysdate) BETWEEN ppnf.effective_start_date and ppnf.effective_end_date
ORDER BY 1,3   
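
To see which HCM Groups exist in an environment before extracting members, the same group tables can be listed on their own:

SELECT hgb.grp_id
      ,hgt.group_name
  FROM HWM_GRPS_B hgb,
       HWM_GRPS_TL hgt
 WHERE hgb.grp_id = hgt.grp_id
   AND hgt.language = 'US'
 ORDER BY hgt.group_name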
HDL – Talent Profile HDL related useful queries

Loading profile data against a worker, job, or position is a little tricky. You have to understand the templates thoroughly before uploading any data.

Below are some of the most commonly used queries to extract the information required for loading Talent Profile data.

Extract Person Profile Code:

SELECT distinct papf.person_id, papf.person_number,ProfilePEO.profile_code
  FROM HRT_PROFILES_VL ProfilePEO, PER_ALL_PEOPLE_F papf
 WHERE papf.person_id = ProfilePEO.person_id
    AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date and papf.effective_end_date

Extract Profile Type Id:

Select * from HRT_PROFILE_TYPES_TL
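
If only the ids and the translated descriptions in the session language are needed, a narrower variant can be used; a minimal sketch, assuming HRT_PROFILE_TYPES_TL carries PROFILE_TYPE_ID, DESCRIPTION and LANGUAGE columns:

SELECT hptt.profile_type_id
      ,hptt.description
  FROM HRT_PROFILE_TYPES_TL hptt
 WHERE hptt.language = userenv('LANG')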
BIP – Extract environment url dynamically

For many BIP reports, it is necessary to publish the environment URL. It is easy to hardcode it in the SQL query; however, when a P2T or T2T refresh happens, the hardcoded URL is copied to the target instance, where it is incorrect. A better approach is to use the piece of code below in the custom SQL to derive the environment URL dynamically:

SELECT 'https://'||EXTERNAL_VIRTUAL_HOST INSTANCE_NAME
  FROM FUSION.ASK_DEPLOYED_DOMAINS
 WHERE DEPLOYED_DOMAIN_NAME = 'FADomain'
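
The derived host can then be concatenated into report columns so that links always point at the current instance. A small sketch (the page path used here is only an illustration):

SELECT 'https://'||dom.EXTERNAL_VIRTUAL_HOST||'/hcmUI/faces/FuseWelcome' WELCOME_URL
  FROM FUSION.ASK_DEPLOYED_DOMAINS dom
 WHERE dom.DEPLOYED_DOMAIN_NAME = 'FADomain'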
BIP – Extract Value Set values Information
SELECT
    fvvs.value_set_code
   ,fvvv.value
   ,fvvv.description
   ,fvvv.enabled_flag
   ,to_char(fvvv.start_date_active,'YYYY/MM/DD') start_date_active
   ,to_char(fvvv.end_date_active,'YYYY/MM/DD') end_date_active
   ,fvvv.summary_flag
   ,fvvv.translated_value
   ,fvvv.independent_value
   ,fvvv.independent_value_number
   ,to_char(fvvv.independent_value_date,'YYYY/MM/DD') independent_value_date
   ,fvvv.independent_value_timestamp
   ,fvvv.value_number
   ,to_char(fvvv.value_date,'YYYY/MM/DD') value_date
   ,fvvv.value_timestamp
   ,fvvv.sort_order 
FROM
    fnd_vs_values_vl       fvvv,
    fnd_vs_value_sets      fvvs
WHERE    fvvv.value_set_id = fvvs.value_set_id
order by fvvs.value_set_code
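
In a BIP data model this is usually restricted to a single value set through a report parameter; a small variation, assuming a :p_value_set_code bind parameter is defined in the data model:

SELECT fvvs.value_set_code
      ,fvvv.value
      ,fvvv.description
      ,fvvv.enabled_flag
  FROM fnd_vs_values_vl fvvv,
       fnd_vs_value_sets fvvs
 WHERE fvvv.value_set_id = fvvs.value_set_id
   AND fvvs.value_set_code = :p_value_set_code
 ORDER BY fvvv.value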
BIP – Extract Salary basis with associated element details
  SELECT csb.name
		,csb.salary_basis_code
		,csb.salary_annualization_factor
		,pld.name LDG
		,petft.element_name
		,pivt.name input_value_name
		,rate.name rate_name
    FROM cmp_salary_bases csb,
	     per_legislative_data_groups_tl pld,
	     pay_element_types_tl petft,
	     pay_input_values_f pivf,
	     pay_input_values_tl pivt,
	     per_rates_f_tl rate
   WHERE 1=1
     and csb.legislative_data_group_id = pld.legislative_data_group_id
     and pld.language = USERENV('LANG')
     and csb.element_type_id = petft.element_type_id
    -- and trunc(sysdate) between petft.effective_start_date and petft.effective_end_date
     and petft.language = USERENV('LANG')
     and csb.element_type_id = pivf.element_type_id
     and csb.input_value_id = pivf.input_value_id
     and trunc(sysdate) between pivf.effective_start_date and pivf.effective_end_date
     and pivf.input_value_id = pivt.input_value_id
     and pivt.language = USERENV('LANG')
     and csb.grade_rate_id = rate.rate_id(+)
     and NVL(rate.language,USERENV('LANG')) = USERENV('LANG')
ORDER BY 1	 
BIP – Query to DELETE Salary data using HDL

There may be a requirement to delete salary data while running the first HDL iterations. You can use the query below for this purpose:

Select 'METADATA|Salary|AssignmentNumber|SalaryAmount|DateFrom|DateTo|SalaryBasisId|SalaryId' Header, 1 data_flow_order
from dual
UNION
SELECT 'DELETE|Salary'||'|'||
paam.assignment_number||'|'||
SALARY_AMOUNT||'|'||
TO_CHAR(cs.date_from,'RRRR/MM/DD', 'nls_date_language=American')||'|'||
TO_CHAR(cs.date_to,'RRRR/MM/DD', 'nls_date_language=American')||'|'||
cs.salary_basis_id||'|'||
cs.salary_id data_row,
2 data_flow_order
FROM cmp_salary cs,
per_all_assignments_m paam
WHERE cs.assignment_id= paam.assignment_id
AND trunc(sysdate) between paam.effective_start_date AND paam.effective_end_date
AND paam.assignment_type in ('E', 'C', 'P')
AND trunc(cs.creation_date) < trunc(sysdate)-1
ORDER BY data_flow_order
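
For the first iterations it can be safer to preview the extract for a small test population before generating the full delete file. A minimal sketch (the person numbers are placeholders):

SELECT papf.person_number
      ,paam.assignment_number
      ,cs.salary_id
      ,cs.salary_basis_id
      ,cs.salary_amount
  FROM cmp_salary cs
      ,per_all_assignments_m paam
      ,per_all_people_f papf
 WHERE cs.assignment_id = paam.assignment_id
   AND paam.person_id = papf.person_id
   AND paam.assignment_type IN ('E','C','P')
   AND TRUNC(SYSDATE) BETWEEN paam.effective_start_date AND paam.effective_end_date
   AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
   AND papf.person_number IN ('11111','22222')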
HDL – Query to DELETE Assigned Payroll data from Fusion

If you need to delete assigned payroll data, for example for masking purposes in a lower environment or to correct bad data, use the query below to pull the data in HDL format:

SELECT 'METADATA|AssignedPayroll|AssignedPayrollId|EffectiveStartDate|AssignmentId|LegislativeDataGroupId|PayrollId|PayrollTermId|SourceSystemId|SourceSystemOwner' HEADER, 1 DATA_ORDER
  FROM DUAL
UNION ALL
SELECT 'DELETE|AssignedPayroll'
	   ||'|'||
       papd.ASSIGNED_PAYROLL_ID
	   ||'|'||
       to_char(papd.start_date,'RRRR/MM/DD')
	   ||'|'||	   
	   paam.assignment_id
	   ||'|'||	   
	   papd.LEGISLATIVE_DATA_GROUP_ID
	   ||'|'||	   
	   papd.Payroll_Id
	   ||'|'||	   
	   papd.Payroll_Term_Id
	   ||'|'||	   
       hikm.source_system_id
	   ||'|'||	   
       hikm.source_system_owner HEADER, 2 DATA_ORDER
  FROM pay_assigned_payrolls_dn papd
	  ,pay_payroll_terms ppt
	  ,pay_pay_relationships_dn prrd
	  ,pay_all_payrolls_f papf_pay
	  ,pay_rel_groups_dn prgd
	  ,pay_rel_groups_f prgf
	  ,per_all_people_f papf
	  ,per_person_names_f ppnf
	  ,per_all_assignments_m paam 
	  ,hrc_integration_key_map hikm
 WHERE papd.payroll_term_id = ppt.payroll_term_id
   AND ppt.payroll_relationship_id = prrd.payroll_relationship_id
   AND papd.payroll_id = papf_pay.payroll_id
   AND prrd.person_id = papf.person_id
   AND prrd.payroll_relationship_id = prgd.payroll_relationship_id
   AND prgd.relationship_group_id = prgf.relationship_group_id
   AND prgd.assignment_id = paam.assignment_id
   AND paam.effective_latest_change = 'Y'
   AND prgd.group_type = 'A'
   AND ppnf.person_id = papf.person_id
   AND ppnf.name_type = 'GLOBAL'
   AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
   AND TRUNC(SYSDATE) BETWEEN ppnf.effective_start_date AND ppnf.effective_end_date
   AND TRUNC(SYSDATE) BETWEEN prgf.effective_start_date AND prgf.effective_end_date
   AND TRUNC(SYSDATE) BETWEEN paam.effective_start_date AND paam.effective_end_date
   AND hikm.surrogate_id = papd.ASSIGNED_PAYROLL_ID
   AND trunc(papd.creation_date) < trunc(sysdate)-1
   --AND papf.person_number = '11111'
 ORDER BY 2

Sample HDL File:

METADATA|AssignedPayroll|AssignedPayrollId|EffectiveStartDate|AssignmentId|LegislativeDataGroupId|PayrollId|PayrollTermId|SourceSystemId|SourceSystemOwner
DELETE|AssignedPayroll|300000012343876|2022/01/19|300000046430123|300000004188123|300000014605111|300000046430342|ASGN_PAYROLL_12345|EBS-HR
BIP – Query to extract Rating Model Levels in HDL Format

This is the second post in a series on extracting Rating Model data from a Fusion environment in HDL format. The first post covers the query to extract only the rating model data (see the next section below); this post describes the BIP query to extract the rating model levels data.

Query to extract Rating Model levels data:

SELECT 'METADATA|RatingLevel|RatingModelCode|RatingLevelCode|CareerStrDev|RatingDescription|DateFrom|FromPoints|MaximumRatingDistribution|MinimumRatingDistribution|NumericRating|ReviewPoints|ReviewRatingDescr|RatingShortDescr|StarRating|DateTo|ToPoints|SourceSystemId|SourceSystemOwner' DATA_LINE, 1 ORDER_NUM
  FROM DUAL
UNION
select
'MERGE'
||'|'||
'RatingLevel'
||'|'||
hrmb.rating_model_code
||'|'||
hrlv.rating_level_code
||'|'||
hrlv.career_str_dev
||'|'||
hrlv.rating_description
||'|'||
TO_CHAR(hrlv.date_from,'YYYY/MM/DD')
||'|'||
hrlv.from_points
||'|'||
hrlv.max_rating_distribution
||'|'||
hrlv.min_rating_distribution
||'|'||
hrlv.numeric_rating
||'|'||
hrlv.review_points
||'|'||
hrlv.review_rating_descr
||'|'||
hrlv.rating_short_descr
||'|'||
hrlv.star_rating
||'|'||
TO_CHAR(hrlv.date_to,'YYYY/MM/DD')
||'|'||
hrlv.to_points
||'|'||
hikm.source_system_id
||'|'||
hikm.source_system_owner DATA_LINE, 2 ORDER_NUM
FROM hrt_rating_models_b hrmb
   , hrt_rating_levels_vl hrlv
   , hrc_integration_key_map hikm
WHERE hrmb.rating_model_id = hrlv.rating_model_id
  AND hikm.surrogate_id = hrlv.rating_level_id
BIP – Query to extract Rating Model data in HDL Format

You can use the query below to extract Rating Model data from a Fusion environment for updates.

SELECT 'METADATA|RatingModel|RatingModelCode|DateFrom|DateTo|RatingName|RatingDescription|SourceSystemId|SourceSystemOwner' DATA_LINE, 1 ORDER_NUM
  FROM DUAL
UNION
select
'MERGE'
||'|'||
'RatingModel'
||'|'||
hrmb.rating_model_code
||'|'||
TO_CHAR(hrmb.date_from,'YYYY/MM/DD')
||'|'||
TO_CHAR(hrmb.date_to,'YYYY/MM/DD')
||'|'||
rtl.rating_name
||'|'||
rtl.rating_description
||'|'||
hikm.source_system_id
||'|'||
hikm.source_system_owner  DATA_LINE, 2 ORDER_NUM
FROM hrt_rating_models_b hrmb,
     hrt_rating_models_tl rtl,
     hrc_integration_key_map hikm
WHERE hrmb.rating_model_id = rtl.rating_model_id
  AND hikm.surrogate_id = hrmb.rating_model_id
  AND rtl.language = 'US'
ORDER BY ORDER_NUM

Save the output as RatingModel.dat before uploading it to Fusion. You can use this query to extract data for reference, or as a way to extract rating model data from one instance and upload it to a different instance.

Check the post below for a sample RatingModel HDL file:

https://fusionhcmconsulting.com/2022/03/hdl-sample-file-to-upload-rating-model/

HDL – Sample HDL to Update User Table value

SQL Query to extract data for a particular value of a user table:

SELECT DATA
FROM (
SELECT 'METADATA|UserDefinedTableColumnInstance|EffectiveEndDate|EffectiveStartDate|LegislativeDataGroupId|UserColumnId|UserColumnInstanceId|UserRowId|Value|SourceSystemOwner|SourceSystemId|UserTableId' DATA, 1 DATA_ROW
FROM DUAL
UNION ALL
SELECT 
'MERGE'
||'|'||
'UserDefinedTableColumnInstance'
||'|'||
to_char(fuci.EFFECTIVE_END_DATE,'YYYY/MM/DD')
||'|'||
to_char(fuci.EFFECTIVE_START_DATE,'YYYY/MM/DD')
||'|'||
fuci.LEGISLATIVE_DATA_GROUP_ID
||'|'||
fuci.USER_COLUMN_ID
||'|'||
fuci.USER_COLUMN_INSTANCE_ID
||'|'||
fuci.USER_ROW_ID
||'|'||
fuci.Value
||'|'||
hikm.source_system_owner  
||'|'||
hikm.source_system_id
||'|'||
fuc.USER_TABLE_ID
 DATA, 2 DATA_ROW
FROM 
FF_USER_COLUMN_INSTANCES_F fuci,
FF_USER_COLUMNS fuc,
FF_USER_TABLES ft,
HRC_INTEGRATION_KEY_MAP hikm
WHERE fuci.USER_COLUMN_ID=fuc.USER_COLUMN_ID
AND fuc.USER_TABLE_ID=ft.USER_TABLE_ID
AND hikm.surrogate_id = fuci.USER_COLUMN_INSTANCE_ID
AND ft.BASE_USER_TABLE_NAME = 'TEST_WAGE'
) ORDER BY DATA_ROW

Sample HDL:

METADATA|UserDefinedTableColumnInstance|EffectiveEndDate|EffectiveStartDate|LegislativeDataGroupId|UserColumnId|UserColumnInstanceId|UserRowId|Value|SourceSystemOwner|SourceSystemId|UserTableId
MERGE|UserDefinedTableColumnInstance|4712/12/31|1951/01/01|300000046974912|300000175181219|300000175188123|300000175187989|5.20|FUSION|300000175144123|300000175181333
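
To identify the user table to target (TEST_WAGE above is just an example), the user table definitions can be listed first; a small sketch, assuming the LEGISLATIVE_DATA_GROUP_ID column is present on FF_USER_TABLES:

SELECT ft.user_table_id
      ,ft.base_user_table_name
      ,ft.legislative_data_group_id
  FROM ff_user_tables ft
 ORDER BY ft.base_user_table_name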