In the enhanced profiles (Profile V2), bulk uploads still use the TalentProfile.dat file as before. However, while loading profile items against a profile, a new attribute, SectionId, must be provided. The SectionId to pass depends on the content item being loaded.
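As an illustration, a ProfileItem entry in TalentProfile.dat might look like the sketch below. The attribute list is trimmed and all values are hypothetical; the exact attributes and the SectionId value depend on how content sections are configured in your environment:
METADATA|ProfileItem|ProfileCode|ContentType|ContentItem|SectionId|DateFrom
MERGE|ProfileItem|PERSON_12345|COMPETENCY|Communication|300000123456789|2020/01/01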
Oracle Learning Cloud supports defining learning outcomes against each course, as shown in the figure below:
For reporting and integration purposes, there is often a need to extract the learning outcomes assigned to each course. Learning outcomes are stored as a profile relationship against each course. The following query can be used to extract this data:
select wlifv.learning_item_number,
wlifv.name learning_item_name,
wlifv.learning_item_type,
wlifv.status,
wlifv.effective_start_date,
wlifv.effective_end_date,
hpi.content_type_id,
hpi.date_from,
hpi.date_to,
hpi.content_item_id competency_id,
hctt.content_type_name,
hcit.name content_item_name
from HRT_RELATION_CONFIG_B hrcb
,HRT_PROFILE_RELATIONS hpr
,HRT_CONTENT_TYPES_B hctb
,HRT_CONTENT_TYPES_TL hctt
,HRT_CONTENT_ITEMS_TL hcit
,HRT_PROFILE_ITEMS hpi
,HRT_PROFILES_B hpb
,WLF_LEARNING_ITEMS_F_VL wlifv
-- relation configuration linking learning items to their related profiles
where hrcb.key_table_name = 'WLF_LEARNING_ITEMS_F_VL'
and hrcb.relation_code = 'LEARNING_ITEM'
and hrcb.relation_id = hpr.relation_id
-- profile items (the learning outcomes) on the related profile
and hpi.profile_id = hpr.profile_id
and hpi.content_type_id = hctb.content_type_id
and hctt.content_type_id = hctb.content_type_id
-- outcomes are stored as competencies
and hctb.context_name = 'COMPETENCY'
and hpi.profile_id = hpb.profile_id
-- join back to the learning item (course)
and hpr.object_id = wlifv.learning_item_id
-- 'L' = learning item profile usage
and hpb.profile_usage_code = 'L'
and hpi.content_item_id = hcit.content_item_id
-- current-dated learning item rows only
and TRUNC(SYSDATE) BETWEEN wlifv.effective_start_date and wlifv.effective_end_date
and hctt.language = 'US'
and hcit.language = 'US'
ORDER BY 1
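To restrict the output to a single course, add a filter on the learning item number. The value below is a hypothetical course number; substitute one from your environment:
and wlifv.learning_item_number = 'OLC1001'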
Lookups are commonly used across modules in SaaS. Sometimes the number of lookups is so large that creating them manually in the application takes a lot of time and effort. Oracle SaaS supports bulk upload of both lookup types and lookup codes.
In this post, we will see how to use the file-based loader to load lookup types.
Prepare the lookup types file as given below:
LookupType|Meaning|Description|ModuleKey|ModuleType
TXX_MASS_UPLOAD|Mass Upload Lookup Definition|Test Lookup created for demo purpose|HcmCommonHrCore|LBA
TXX_MASS_UPLOAD_A|Mass Upload Lookup Def – A|Test Lookup created for demo purpose-A|HcmCommonHrCore|LBA
Of the attributes listed above, only Description is optional.
ModuleKey and ModuleType are both required parameters. To find the values to pass, check the post below:
Oracle HCM uses the extended lookups feature to support dependent lookup values. For example, while creating a Visa or Work Permit record for a person in Singapore, the Category field depends on the Type of pass chosen. The values of the Category field are derived from an extended lookup.
Navigate to Setup and Maintenance -> Manage Extended Lookup Codes -> Visa Permit Type
Now, let us take an example where we need to load two Category values that depend on the lookup code S Pass.
Prepare the HDL file in the format below:
METADATA|ExtendedLookupCode|ExtendedLookupCodeId|LookupType|LookupCode|LegislationCode|ExtendedLookupCode|ExtendedLookupCodeName|SourceSystemOwner|SourceSystemId
MERGE|ExtendedLookupCode||PER_VISA_PERMIT_TYPE|SG_SP|SG|TEST_SP1|S Pass Holder – Test 1|HRC_SQLLOADER|TEST_1
MERGE|ExtendedLookupCode||PER_VISA_PERMIT_TYPE|SG_SP|SG|TEST_SP2|S Pass Holder – Test 2|HRC_SQLLOADER|TEST_2
Save the file as ExtendedLookupCode.dat, zip it, and upload it using HCM Data Loader from Data Exchange.
On successful load, the new values can be verified from either of the following two places in the UI:
While defining Common Lookups or value sets, you need to provide a module value. Each module has an associated module type, module key, and product code. For example:
These details are stored in the back end in the table FND_APPL_TAXONOMY.
Use the query below to find the module type, module key, etc. for a module:
-- lists every module with its module type, module key and product code
select fat.MODULE_NAME
,fat.MODULE_TYPE
,fat.MODULE_KEY
,fat.PRODUCT_CODE
from FND_APPL_TAXONOMY fat
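For example, to confirm the values used in the lookup types file earlier in this post (module key HcmCommonHrCore, module type LBA), filter the same table on the module key:
select fat.MODULE_NAME
,fat.MODULE_TYPE
,fat.MODULE_KEY
,fat.PRODUCT_CODE
from FND_APPL_TAXONOMY fat
where fat.MODULE_KEY = 'HcmCommonHrCore'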
Lookups are commonly used to meet different requirements. Many times, the values for a lookup easily run past a hundred entries; in such cases, adding the values one by one in the application is a tedious and error-prone job.
There is no HDL support for bulk uploading lookup values. However, a file-based solution is available that is easy to use and quick.
We have already discussed how to bulk upload lookup types in the post below:
Follow the steps below to mass upload lookup values:
1. Create a custom lookup type from the UI:
[N] – Setup and Maintenance -> Search -> Manage Common Lookups
2. Click on Add New (+) under search results:
3. Provide the details and Click on Save:
4. Prepare the lookup values file in the format below:
LookupType|LookupCode|DisplaySequence|EnabledFlag|StartDateActive|EndDateActive|Meaning|Description|Tag
XXX_MASS_UPLOAD|MASS_01|1|Y|15/12/2001||Mass Upload Value 1|Mass Upload Value 1 Description|+GB
The following attributes in the above file are mandatory:
-> LookupType
-> LookupCode
-> EnabledFlag
-> Meaning
All other fields are optional.
The date format for the StartDateActive and EndDateActive attributes is DD/MM/RRRR.
The file should be pipe (|) delimited.
Save the file with a .csv extension.
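Optionally, if the lookup values already exist in another environment, the file contents can be generated with SQL rather than typed by hand. This is a sketch assuming the standard FND_LOOKUP_VALUES table and the column names shown here; verify both in your instance before relying on it:
select fvl.lookup_type||'|'||
fvl.lookup_code||'|'||
fvl.display_sequence||'|'||
fvl.enabled_flag||'|'||
to_char(fvl.start_date_active,'DD/MM/RRRR')||'|'||
to_char(fvl.end_date_active,'DD/MM/RRRR')||'|'||
fvl.meaning||'|'||
fvl.description||'|'||
fvl.tag line
from FND_LOOKUP_VALUES fvl
where fvl.lookup_type = 'XXX_MASS_UPLOAD' -- source lookup type to extract
and fvl.language = 'US'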
5. Once the file is ready, navigate to Tools -> File Import and Export
6. Click on Add (+) and choose your file:
Select the account as setup/functionalSetupManager/import
Click on Save and Close.
7. Navigate to Manage Common Lookups. Under Search Results click on Action and Import:
8. Select the account, give the file name as uploaded in step 6, and click the Upload button:
9. Monitor the import progress:
10. Once the import is complete, verify the uploaded values:
11. Results can be verified from Import file log as well:
Both lookup types and lookup codes can be loaded in one shot as well: prepare both files simultaneously and follow the same steps as given above.
There is a common requirement to mask salary data after P2T (production-to-test) refreshes. This should be done to hide actual salary figures, as salary is highly sensitive information.
Use the query below to generate data in HDL format in a test environment immediately after a P2T refresh. The query generates a random salary amount for each row. Save the output in .dat file format and load it back into the instance.
-- generates the Salary.dat contents: a METADATA header row followed by MERGE data rows
Select 'METADATA|Salary|AssignmentNumber|SalaryAmount|DateFrom|DateTo|SalaryBasisId|SalaryId' Header, 1 data_flow_order
from dual
UNION
SELECT 'MERGE|Salary'||'|'||
paam.assignment_number||'|'||
round(DBMS_RANDOM.VALUE (1,15000), 2)||'|'|| -- random masking amount between 1 and 15000
TO_CHAR(cs.date_from,'RRRR/MM/DD', 'nls_date_language=American')||'|'||
TO_CHAR(cs.date_to,'RRRR/MM/DD', 'nls_date_language=American')||'|'||
cs.salary_basis_id||'|'||
cs.salary_id data_row,
2 data_flow_order
FROM cmp_salary cs,
per_all_assignments_m paam
WHERE cs.assignment_id = paam.assignment_id
AND trunc(sysdate) between paam.effective_start_date AND paam.effective_end_date
AND paam.assignment_type in ('E', 'C', 'P') -- employees, contingent workers, pending workers
AND paam.assignment_number = 'E788880' -- test filter; remove this line to mask all assignments
ORDER BY data_flow_order -- ensures the METADATA line comes first