BIP – Extract Union Details from Assignment
SELECT paam.assignment_number
      ,paam.labour_union_member_flag
      ,houft.name
  FROM per_all_assignments_m paam
      ,hr_organization_units_f_tl houft
 WHERE paam.assignment_type ='E'
   AND paam.union_id =  houft.organization_id
   AND houft.language = 'US'
   AND paam.effective_latest_change = 'Y'
   AND TRUNC(SYSDATE) BETWEEN paam.effective_start_date AND paam.effective_end_date
   AND TRUNC(SYSDATE) BETWEEN houft.effective_start_date AND houft.effective_end_date
HDL – Sample File to DELETE Documents of Record

Use the query below to extract the documents of record that should be deleted. You can add additional filters, for example on person number.

select 'DELETE'                    "METADATA"
      ,'DocumentsOfRecord'         "DocumentsOfRecord"
      ,hdor.documents_of_record_id "DocumentsOfRecordId"
      ,hdor.document_type_id       "DocumentTypeId"
      ,hdor.document_code          "DocumentCode"
      ,hdor.document_name          "DocumentName"
      ,hdor.document_number        "DocumentNumber"
      ,hikm.source_system_owner    "SourceSystemOwner"
      ,hikm.source_system_id       "SourceSystemId"
  from HR_DOCUMENTS_OF_RECORD hdor
      ,HRC_INTEGRATION_KEY_MAP hikm
 where hdor.documents_of_record_id = hikm.surrogate_id
   and hikm.object_name = 'DocumentsOfRecord'
   and hdor.documents_of_record_id = 300000217125443

Query for Person Number and Document Type:

select DISTINCT 'DELETE'           "METADATA"
      ,'DocumentsOfRecord'         "DocumentsOfRecord"
      ,hdor.documents_of_record_id "DocumentsOfRecordId"
      ,hdor.document_type_id       "DocumentTypeId"
      ,hdor.document_code          "DocumentCode"
      ,hdor.document_name          "DocumentName"
      ,hdor.document_number        "DocumentNumber"
      ,hikm.source_system_owner    "SourceSystemOwner"
      ,hikm.source_system_id       "SourceSystemId"
  from HR_DOCUMENTS_OF_RECORD hdor
      ,HRC_INTEGRATION_KEY_MAP hikm
      ,PER_ALL_PEOPLE_F papf
      ,HR_DOCUMENT_TYPES_TL hdtt
 where hdor.documents_of_record_id = hikm.surrogate_id
   and hikm.object_name = 'DocumentsOfRecord'
   --and hdor.documents_of_record_id = 300000217125443
   and hdor.person_id = papf.person_id
   and TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
   and hdor.document_type_id = hdtt.document_type_id
   and hdtt.language = 'US'
   and hdtt.document_type = 'Test Doc'
   and papf.person_number IN ('12','23')

Sample HDL File:

METADATA|DocumentsOfRecord|DocumentsOfRecordId|DocumentTypeId|DocumentCode|DocumentName|DocumentNumber|SourceSystemOwner|SourceSystemId
DELETE|DocumentsOfRecord|300000217125443|300000217168555|TMAD_001|Multiple Attachments||HRC_SQLLOADER|HRC_SQLLOADER_101_TMAD_001
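The DELETE lines can also be assembled programmatically from the query output. A minimal Python sketch; the record below simply mirrors the illustrative sample row above, not real data:

```python
# Assemble DocumentsOfRecord DELETE lines for an HDL .dat file.
# Field order must match the METADATA line exactly.
FIELDS = ["DocumentsOfRecordId", "DocumentTypeId", "DocumentCode",
          "DocumentName", "DocumentNumber", "SourceSystemOwner", "SourceSystemId"]

HEADER = "METADATA|DocumentsOfRecord|" + "|".join(FIELDS)

def delete_line(rec):
    # rec is a dict keyed by field name; NULL/missing values become empty fields
    values = [str(rec.get(f) or "") for f in FIELDS]
    return "|".join(["DELETE", "DocumentsOfRecord"] + values)

# Illustrative record mirroring the sample row above -- not real data.
rec = {"DocumentsOfRecordId": 300000217125443,
       "DocumentTypeId": 300000217168555,
       "DocumentCode": "TMAD_001",
       "DocumentName": "Multiple Attachments",
       "DocumentNumber": None,
       "SourceSystemOwner": "HRC_SQLLOADER",
       "SourceSystemId": "HRC_SQLLOADER_101_TMAD_001"}
```

Writing `HEADER` followed by one `delete_line(rec)` per query row reproduces the sample file shown above.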
BIP – Query to extract logged-in person details
select papf.person_number
      ,ppnf.full_name
  from per_all_people_f papf
      ,per_person_names_f ppnf
 where papf.person_id = ppnf.person_id
   and ppnf.name_type = 'GLOBAL'
   and trunc(sysdate) between papf.effective_start_date and papf.effective_end_date
   and trunc(sysdate) between ppnf.effective_start_date and ppnf.effective_end_date
   and papf.person_id = HRC_SESSION_UTIL.GET_USER_PERSONID
BIP – Query to extract Content Items Details
SELECT  'METADATA'
       ,'ContentItem'
	   ,hctb.content_type_id
       ,hctb.context_name
       ,hcibt.name
       ,hctvt.value_set_name
       ,hg.geography_name
       ,hcibt.item_description
       ,to_char(hcib.date_from,'yyyy/mm/dd')
       ,hg.geography_code
       ,hcib.content_item_code
       ,to_char(hcib.date_to,'yyyy/mm/dd')
       ,hrmb.rating_model_code
       ,hrmb.rating_model_id
       ,hikm.source_system_id
       ,hikm.source_system_owner 
FROM HRT_CONTENT_ITEMS_B hcib
    ,HRT_CONTENT_ITEMS_TL hcibt
	,HRT_CONTENT_TYPES_B hctb
	,HZ_GEOGRAPHIES hg
	,HRT_RATING_MODELS_B hrmb
	,HRT_CONTENT_TP_VALUESETS_TL hctvt
	,HRC_INTEGRATION_KEY_MAP hikm
where hcib.CONTENT_ITEM_ID=hcibt.CONTENT_ITEM_ID
  AND hcibt.LANGUAGE=userenv('LANG')
  AND hcib.CONTENT_TYPE_ID=hctb.CONTENT_TYPE_ID(+)
  AND hikm.surrogate_id = hcib.CONTENT_ITEM_ID
  AND hcib.COUNTRY_ID=hg.GEOGRAPHY_ID(+)
  AND hg.GEOGRAPHY_TYPE(+)='COUNTRY'
  AND trunc(hcib.DATE_FROM) between hg.START_DATE(+) and nvl(hg.END_DATE(+),to_date('4712/12/31','YYYY/MM/DD'))
  AND hcib.RATING_MODEL_ID=hrmb.RATING_MODEL_ID(+)
  AND trunc(hcib.DATE_FROM) between hrmb.DATE_FROM(+) and nvl(hrmb.DATE_TO(+),to_date('4712/12/31','YYYY/MM/DD'))
  AND hcib.CONTENT_VALUE_SET_ID=hctvt.CONTENT_VALUE_SET_ID(+)
  AND hctvt.LANGUAGE(+)=userenv('LANG')
  AND hctb.CONTEXT_NAME = 'LANGUAGE'
BIP – Extract Supervisor HDL details
SELECT a.datarow
  FROM (
SELECT 'METADATA|AssignmentSupervisor|AssignmentSupervisorId|AssignmentNumber|ManagerAssignmentNumber|ManagerPersonNumber|ManagerType|AssignmentId|EffectiveEndDate|EffectiveStartDate|ActionCode|ManagerId|PersonId|PrimaryFlag|ReasonCode|ManagerAssignmentId|SourceSystemId|SourceSystemOwner' DATAROW, 1 SEQUENCE
   FROM DUAL
UNION ALL
SELECT   DISTINCT 'MERGE'
         || CHR (124)
		 ||'AssignmentSupervisor'
		 || CHR (124)
		 ||NULL
		 || CHR (124)
		 ||paaf.assignment_number
		 || CHR (124)
         ||paaf_m.assignment_number
		 || CHR (124)
		 ||papf.person_number 
		 || CHR (124)
         ||pasup.manager_type
         || CHR (124)
		 ||NULL		 
         || CHR (124)		 
		 ||TO_CHAR (pasup.effective_end_date,'DD/MON/YYYY')
		 || CHR (124)
         ||TO_CHAR (pasup.effective_start_date,'DD/MON/YYYY')
         || CHR (124)         
	     ||pab.action_code
		 || CHR (124)         
	     ||NULL
		 || CHR (124)         
	     ||NULL
		 || CHR (124)         
	     ||pasup.primary_flag
		 || CHR (124)         
	     ||parb.action_reason_code
		 || CHR (124)         
	     ||NULL
		 || CHR (124)        
         ||hikm.source_system_id
		 || CHR (124)         		 
         ||hikm.source_system_owner DATAROW, 2 SEQUENCE
    FROM per_all_people_f papf,
         per_assignment_supervisors_f pasup,
         per_all_assignments_m paaf,
         per_all_assignments_m paaf_m,
		 hrc_integration_key_map hikm,
		 per_action_occurrences pao,
		 per_action_reasons_b parb,
		 per_actions_b pab
   WHERE pasup.assignment_id = paaf.assignment_id
     AND pasup.manager_assignment_id = paaf_m.assignment_id
     AND papf.person_id = paaf_m.person_id
     AND paaf.assignment_type = 'E'
     --AND paaf.assignment_status_type = 'ACTIVE'
     AND TRUNC (SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
     AND pasup.effective_start_date BETWEEN paaf.effective_start_date AND paaf.effective_end_date
     AND pasup.effective_start_date BETWEEN paaf_m.effective_start_date AND paaf_m.effective_end_date
     AND pasup.assignment_supervisor_id = hikm.surrogate_id
     AND hikm.object_name = 'AssignmentSupervisor'
     AND pao.action_occurrence_id = pasup.action_occurrence_id
     AND pao.action_id = pab.action_id
     AND pao.action_reason_id = parb.action_reason_id	 
) a 
ORDER BY SEQUENCE
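The CHR(124) concatenation above simply joins fields with a pipe; Oracle's `||` treats NULL as an empty string, so NULL columns become empty fields. If you post-process extracted rows outside the database, a small helper can mirror that behaviour (the field values below are illustrative only):

```python
# Each field in the SQL above is glued with CHR(124) ('|'); Oracle's '||'
# treats NULL as an empty string, so NULL columns become empty fields.
def hdl_row(*fields):
    return "|".join("" if f is None else str(f) for f in fields)

# Illustrative values only -- 18 fields, matching the METADATA line above.
row = hdl_row("MERGE", "AssignmentSupervisor", None, "E12345", "E99999",
              "100", "LINE_MANAGER", None, "31/DEC/4712", "01/JAN/2022",
              "MANAGER_CHANGE", None, None, "Y", "PROMOTION", None,
              "300000123456789", "FUSION")
```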
BIP – Extract Worker Contact Details
SELECT DISTINCT papf.person_number	"Emp Person Number"
               ,papf_cont.person_number	"Contact Person Number"
			   ,hl.meaning 			"Relationship"
			   ,TO_CHAR (papf_cont.effective_start_date,'RRRR/MM/DD') "Contact Eff Start D"
			   ,TO_CHAR (papf_cont.effective_end_date,'RRRR/MM/DD') "Contact Eff End D"
			   ,TO_CHAR (papf_cont.start_date, 'RRRR/MM/DD') "Contact Start Date"
			   ,TO_CHAR (pcrf.effective_start_date,'RRRR/MM/DD') "Rel Start Date"
			   ,TO_CHAR (pcrf.effective_end_date,'RRRR/MM/DD') "Rel End Date"
			   ,TO_CHAR (pp.date_of_birth, 'RRRR/MM/DD') "Contact Person DOB"
			   ,ppnf.first_name	"Contact First Name"
			   ,ppnf.last_name 	"Contact Last Name"
          FROM per_contact_relships_f pcrf
		      ,per_all_people_f papf_cont
			  ,per_all_people_f papf
			  ,per_persons pp
			  ,per_person_names_f ppnf
			  ,hcm_lookups hl
         WHERE papf_cont.person_id = pcrf.contact_person_id
		   AND papf.person_id = pcrf.person_id
           AND pp.person_id = pcrf.contact_person_id
           AND ppnf.person_id = pcrf.contact_person_id
           AND ppnf.name_type = 'GLOBAL'
           AND hl.lookup_code = pcrf.contact_type
           AND hl.lookup_type = 'CONTACT'
      ORDER BY papf.person_number, hl.meaning
HDL – Update Goal Weight

Goal weight can be updated using the GoalMeasurement metadata of the Goal business object. First, we need to extract the measurements that were already loaded. Use the query below to extract the details:

SELECT HDL_DATA
FROM (
SELECT 'METADATA|GoalMeasurement|MeasurementId|MeasurementName|StartDate|EndDate|TargetValue|TargetPercentage|UomCode|MeasureTypeCode|TargetType|GoalId|SourceSystemId|SourceSystemOwner' HDL_DATA, 1 SEQ
  FROM DUAL
UNION ALL
SELECT 'MERGE'
     || CHR (124)
     || 'GoalMeasurement'
     || CHR (124)	 
	 || hgm.measurement_id
     || CHR (124)
	 || hgm.measurement_name
     || CHR (124)
	 || TO_CHAR(hgm.start_date,'RRRR/MM/DD')
     || CHR (124)
	 || TO_CHAR(hgm.end_date,'RRRR/MM/DD')
     || CHR (124)
	 || hgm.target_value
     || CHR (124)
	 || hgm.target_percentage
     || CHR (124)
	 || hgm.uom_code
     || CHR (124)
	 || hgm.measure_type_code
     || CHR (124)
	 || hgm.target_type
     || CHR (124)
	 || hg.goal_id
     || CHR (124)
	 || hikm.source_system_id
     || CHR (124)
	 || hikm.source_system_owner HDL_DATA, 2 SEQ
  FROM hrg_goals hg
      ,hrg_goal_measurements hgm
      ,hrc_integration_key_map hikm
 WHERE hg.goal_id = hgm.goal_id
   AND hgm.measurement_id = hikm.surrogate_id
   AND hgm.uom_code = 'PERCENT'
   --AND hg.goal_id = 300000215856607
)
ORDER BY SEQ

Save the data in a Goal.dat HDL file and update the target values as required. Then save the .dat file and upload it.

Sample HDL:

METADATA|GoalMeasurement|MeasurementId|MeasurementName|StartDate|EndDate|TargetValue|TargetPercentage|UomCode|MeasureTypeCode|TargetType|GoalId|SourceSystemId|SourceSystemOwner
MERGE|GoalMeasurement|300000215856608|Customer Survey Results|2019/01/01|2019/12/31|80||PERCENT|QUANTITATIVE|MIN|300000215856607|300000215856608|FUSION
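If you prefer to script the edit instead of changing the .dat file by hand, a small sketch that rewrites the TargetValue column (index 6 of the pipe-delimited MERGE lines) could look like this:

```python
# Rewrite the TargetValue field (index 6) of GoalMeasurement MERGE lines in a
# Goal.dat file. Other lines (e.g. the METADATA header) pass through unchanged.
def set_target_value(line, new_value):
    parts = line.split("|")
    if parts[0] == "MERGE" and parts[1] == "GoalMeasurement":
        parts[6] = str(new_value)
    return "|".join(parts)

sample = ("MERGE|GoalMeasurement|300000215856608|Customer Survey Results|"
          "2019/01/01|2019/12/31|80||PERCENT|QUANTITATIVE|MIN|"
          "300000215856607|300000215856608|FUSION")
updated = set_target_value(sample, 90)
```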
BIP – Providing Link of BIP report on Self Service

A common requirement is for employees to access reports directly from self-service instead of navigating to Analytics. In such cases, a report link can be created and added as a static link on the Navigator, enabling employees to access the report directly. The report can then be rendered in different views by passing different parameters in the report link.

Steps to enable report link on self-service:

  • Derive the report link. Navigate to analytics and open the report:
    • https://xxxx-fa-ext.oracledemos.com/analytics/

Prepare the report link in below format:

https://xxxx-fa-ext.oracledemos.com/xmlpserver/Custom/HCM/INVOKE_BIP_RPT.xdo?_xpt=0&_xmode=4&_xf=pdf&_xautorun=true

Refer to the link below for more details on the link parameters:

https://docs.oracle.com/cd/E80149_01/bip/BIPRD/GUID-F788B306-99F6-432E-BCD5-F45046D31684.htm#BIPRD3324

The _xmode parameter is used to select different views. Below are the different views depending upon the _xmode value:

xmode=0

xmode=1

xmode=2

xmode=4

Passing Parameter in report link:

https://ucf6-zuyv-fa-ext.oracledemos.com/xmlpserver/Custom/HCM/INVOKE_BIP_RPT.xdo?_xpt=0&_xmode=4&p_emp_num=10&_xt=Simple&_xf=html
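Rather than hand-editing the query string, the link can be composed from its parts. A short sketch using Python's standard library; the host, report path, and p_emp_num parameter are taken from the example above and are environment- and report-specific:

```python
from urllib.parse import urlencode

# Compose a BIP report link from its parts. The base path and parameter names
# come from the example above; p_emp_num is a report-specific parameter and
# the host is environment-specific.
base = "https://ucf6-zuyv-fa-ext.oracledemos.com/xmlpserver/Custom/HCM/INVOKE_BIP_RPT.xdo"
params = {"_xpt": 0, "_xmode": 4, "p_emp_num": 10, "_xt": "Simple", "_xf": "html"}
report_link = base + "?" + urlencode(params)
```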

  • Create a new Sandbox and add Structure under tools. Click on Create and Enter:

Click on Tools under sandbox and click on Create Page Entry:

Create a page entry with below details and click on Save and Close:

You can now see Report Link under Me:

Click on report link and it will open the report in a new tab:

If everything looks ok, you can publish the Sandbox.

BIP – My Public Info Extract

Each person can publish public info in HCM Cloud by navigating to Me -> Quick Actions -> Public Info. There are various sections under Public Info such as Public Message, Contact Info, and About Me.

You can use the below SQL queries to extract the data for relevant sections.

Public Message:

SQL Query:

SELECT papf.person_number
      ,ppnf.full_name
      ,ppmv.PORTRAIT_MESSAGE_CONTENT
      ,ppmv.TIME_FROM
      ,ppmv.TIME_TO
  FROM per_all_people_f papf
      ,per_person_names_f ppnf
      ,per_portrait_messages_vl ppmv
 WHERE papf.person_id = ppnf.person_id
   AND ppnf.name_type = 'GLOBAL'
   AND papf.person_id = ppmv.target_person_id
   AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
   AND TRUNC(SYSDATE) BETWEEN ppnf.effective_start_date AND ppnf.effective_end_date

About Me:

SQL Query:

select papf.person_number
      ,ppnf.full_name
      ,hpv.summary about_me
      ,hpkaoe.keywords area_of_expertise
      ,regexp_replace(regexp_replace(regexp_replace(hpkaoe.KEYWORDS, '</p><br/>'), '</p>'),'<p>') as area_of_expertise_wo_html
      ,hpkaoi.keywords area_of_interest
      ,regexp_replace(regexp_replace(regexp_replace(hpkaoi.KEYWORDS, '</p><br/>'), '</p>'),'<p>') as area_of_interest_wo_html
 from PER_ALL_PEOPLE_F papf,
      PER_PERSON_NAMES_F ppnf,
      HRT_PROFILES_VL hpv,
      HRT_PROFILE_KEYWORDS hpkaoe,
      HRT_PROFILE_KEYWORDS hpkaoi
where hpv.profile_id = hpkaoe.profile_id
  and hpkaoe.profile_id = hpkaoi.profile_id
  and hpkaoe.keyword_type = 'AOE'
  and hpkaoi.keyword_type = 'AOI'
  and papf.person_id = hpv.person_id
  and papf.person_id = ppnf.person_id
  and ppnf.name_type = 'GLOBAL'
  and TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
  and TRUNC(SYSDATE) BETWEEN ppnf.effective_start_date AND ppnf.effective_end_date
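The nested REGEXP_REPLACE calls above just strip the `<p>`, `</p>`, and `<br/>` markup from the stored keywords. The same clean-up outside the database, as a quick sketch:

```python
import re

# The nested REGEXP_REPLACE calls in the query drop '</p><br/>', '</p>' and
# '<p>' in that order; this mirrors the same clean-up in Python.
def strip_html(keywords):
    for pattern in ('</p><br/>', '</p>', '<p>'):
        keywords = re.sub(pattern, '', keywords)
    return keywords

clean = strip_html('<p>Oracle HCM</p><br/><p>BI Publisher</p>')
```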
BIP – Query to find attachment details

There are multiple objects in Oracle HCM Cloud which support attachments. Often, there is a need to extract the attachment details stored at the object level. All uploaded attachments are stored in the fnd_attached_documents table. The query below extracts the attachment details stored at the Service Request level in HR Helpdesk. You can replace the table name and the primary-key join to extract the data as per your requirement:

select ssr.sr_id,
       ssr.title,  
       fad.CATEGORY_NAME ,
       fdt.file_name,
       fdt.dm_version_number document_id,
       fdt.dm_document_id UCM_file
  from fnd_attached_documents fad, svc_service_requests ssr, fnd_documents_tl fdt
 where ENTITY_NAME = 'SVC_SERVICE_REQUESTS'
   and ssr.sr_id = fad.PK1_VALUE
   and fad.document_id = fdt.document_id
   and fdt.language = 'US'

Query to get list of Jobs having an attachment:

 select pjft.name
   from per_jobs_f_tl pjft
  where pjft.language = 'US'
    and exists (select 1  
                  from fnd_attached_documents fad, PER_JOBS_F pjf, fnd_documents_tl fdt
                 where ENTITY_NAME = 'PER_JOBS_F'
                   and pjf.job_id = fad.PK1_VALUE
                   and fad.document_id = fdt.document_id
                   and fdt.language = 'US'
				   and pjf.job_id = pjft.job_id)

Refer below link for attachments on Position profile:

https://fusionhcmconsulting.com/2021/03/bip-reports-query-to-get-position-profile-attachment-details/

In order to download the attachments from UCM, the user should have the AttachmentsRead role attached. Please check the below post on how to create the AttachmentsRead role:

https://fusionhcmconsulting.com/2021/03/security-role-to-view-document-of-record-attachments-on-ucm/

Configuration – Value set to get Employee Category list

Employee category (EMPLOYEE_CATG) is a delivered lookup type in Oracle HCM which can be extended. However, in a few cases there is a requirement to populate the employee category in a DFF attribute; in that case, we need to create a table-defined value set.

FROM Clause: fnd_lookup_values
Value Attributes Table Alias:
*Value Column Name: substr(meaning,1,80)
Value Column Type: VARCHAR2
Value Column Length: 80
Description Column Name: substr(meaning,1,100)
Description Column Type: VARCHAR2
Description Column Length: 80
ID Column Name: substr(meaning,1,100)
ID Column Type: VARCHAR2
ID Column Length: 80
Enabled Flag Column Name:
Start Date Column Name:
End Date Column Name:
WHERE Clause: lookup_type = 'EMPLOYEE_CATG' and TAG is NULL and language = 'US' and enabled_flag = 'Y'
ORDER BY Clause: substr(meaning,1,100)
BIP – Query to get Position costing details
SELECT hapft.name
      ,pcaa.segment1 
      ,pcaa.segment2 
      ,pcaa.segment3 
      ,pcaa.segment4 
      ,pcaa.segment5 
      ,pcaa.segment6 
      ,pcaa.segment7 
      ,pcaa.segment8 
  FROM PAY_COST_ALLOCATIONS_F pacf
      ,PAY_COST_ALLOC_ACCOUNTS pcaa
	  ,HR_ALL_POSITIONS_F hapf
	  ,HR_ALL_POSITIONS_F_TL hapft
 WHERE pacf.cost_allocation_record_id = pcaa.cost_allocation_record_id 
   AND pacf.source_type = 'POS' 
   AND pacf.source_id = hapf.position_id
   AND hapft.position_id = hapf.position_id
   AND hapft.language = 'US'
   AND TRUNC(SYSDATE) BETWEEN pacf.effective_start_date AND pacf.effective_end_date
   AND TRUNC(SYSDATE) BETWEEN hapf.effective_start_date AND hapf.effective_end_date
   AND TRUNC(SYSDATE) BETWEEN hapft.effective_start_date AND hapft.effective_end_date
Extract – Loopback Inbound Extract

Often in Cloud HCM, we encounter situations where we need to update some information on the worker assignment after the worker is created, because that information was not available at the time of hire. One such example is the employee category. Let's take a hypothetical example: the employee category should be auto-populated based on the worker's job. As there is no direct link between employee category and job, it becomes a pain to manually search for and enter the correct employee category while hiring. So, in this case, the worker is hired with a job but with no value for employee category.

A DFF is enabled at the job level which stores the corresponding employee category. So, in this case, we design a solution which will:

  • Read the worker job and then the corresponding employee category from Job.
  • Generate the data for WorkTerms and Assignments METADATA in HCM Data Loader Format.
  • HCM Extract to consume the data and trigger HDL Import and Load Process.
  • Schedule HCM Extract to run daily or depending upon the requirement.

Once the HCM Extract is run, the employee category will be populated automatically.
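As a rough sketch of the generation step, assuming a simple job-to-category mapping held in the job-level DFF (the mapping values and the trimmed-down Assignment field list below are purely illustrative; the real file must carry the full WorkTerms/Assignment shapes from the linked post):

```python
# Hypothetical job-to-category mapping, standing in for the DFF values stored
# on the job. Real values depend on your configuration.
JOB_CATEGORY_DFF = {"Software Engineer": "White Collar",
                    "Machine Operator": "Blue Collar"}

def assignment_lines(assignments):
    # assignments: iterable of (assignment_number, job_name, effective_date)
    for asg_number, job_name, eff_date in assignments:
        category = JOB_CATEGORY_DFF.get(job_name)
        if category:  # skip workers whose job has no category mapped
            # Trimmed-down field list for illustration only; the real file
            # needs the full WorkTerms/Assignment attribute sets.
            yield "|".join(["MERGE", "Assignment", asg_number, eff_date, category])

rows = list(assignment_lines([("E100-2", "Software Engineer", "2023/01/01")]))
```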

Steps to design the integration:

  1. Extract the Workterms and assignment data for all workers where the job is populated and employee category is NULL.
  2. Create a BIP publisher report to organize the data extracted in Step 1 in HCM Data Loader format. Copy the Global Reports Data Model (from path /Shared Folders/Human Capital Management/Payroll/Data Models/globalReportsDataModel) to a folder in /Shared Folders/Custom/HR. This folder can be anything as per your nomenclature specifications.
  3. Add a new data set in the globalReportsDataModel and paste your query in the new data set.
  4. Check below post on sample query to extract the data in HCM Data Loader format: https://fusionhcmconsulting.com/2021/01/hdl-updating-an-attribute-on-assignment/
  5. Create an etext template (a sample RTF template is available at the above link) and a report based on the above data model.
  6. The Global Data Model will look like this (G_3 is the new data set):

Steps to create HCM Extract:

You can import the below sample extract in your environment:

  • Download the above file. Change the extension to xml.
  • Open the xml file with Notepad or Notepad++ and remove first two rows (these rows were added to make sure the file is uploaded here).
  • Navigate to My Client Groups -> Data Exchange -> HCM Extracts -> Extract Definitions:
  • Click on Import to import the xml file
  • Provide an Extract name. Uncheck the Changes Only checkbox and click on Ok:
  • Once the extract Import is complete, Click on pencil icon to edit:
  • Click on ‘Extract Delivery Option’ in the navigation tree on the left side. On the right side, under ‘Extract Delivery Options’, click on edit to update the path of the report you created earlier. It should look like – /Custom/HR/AssignmentUpdateRPT.xdo
  • Make sure the default value for the parameter Auto Load is set to “Y”.
  • Save the details. Next, click on Extract Execution Tree and click All Formula:
  • Once the formulas are compiled, click on the Submit button.

The next step is to refine the extract so that it can submit the Import and Load process:

  • Navigate to My Client Groups -> Data Exchange -> HCM Extracts -> Refine Extracts. Search for the extract and click on edit.
  • Select and add – Initiate HCM Data Loader process
  • Click on Go Task for “Initiate HCM Data Loader”, then click Edit for “Data Loader Archive Action” and add the relevant parameters:

Parameter Basis – Bind to Flow Task
Basis Value – XX Assignment Update Integration, Submit, Payroll Process

  • Click Edit for “Data Loader Configurations” and add the relevant parameters:

Parameter Basis – Constant Bind
Basis Value – ImportMaximumErrors=100,LoadMaximumErrors=100,LoadConcurrentThreads=8,LoadGroupSize=100
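The basis value is a comma-separated list of name=value settings; a tiny parser makes it easy to sanity-check the settings before submission:

```python
# The Data Loader Configurations basis value is a comma-separated list of
# name=value pairs; parsing it into a dict makes the settings easy to verify.
def parse_hdl_config(value):
    return dict(pair.split("=", 1) for pair in value.split(","))

cfg = parse_hdl_config(
    "ImportMaximumErrors=100,LoadMaximumErrors=100,"
    "LoadConcurrentThreads=8,LoadGroupSize=100")
```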

  • Task sequence should look as follows:
  • Go to Review and click on Submit.

Your extract is now ready for submission. You can submit the extract and test it.