- Preface
- Installation Prerequisites
- Installing the Master for Windows
- Installing the Master for Unix
- Installing Client Manager
- Installing the Java Client
- Installing Fault Tolerance
- Installing the Agent
- Installing Adapters
- Basic Configuration
- Configuring SSL Messaging
- Defining Users
- Upgrading Components
- Troubleshooting TES
- Appendix A
- Appendix B
Installing Adapters
This chapter describes prerequisites and installation information for the TES adapters that require manual setup.
Informatica Adapter
The TES Adapter for Informatica integrates with PowerCenter using Informatica’s Load Manager SDK, a set of application programming interfaces (APIs) that allows interaction with the PowerCenter Server for workflow management. Via this programming interface, the Informatica Adapter communicates with the Load Manager component of the PowerCenter Server to run and monitor workflows. To provide user access to Repository data such as Folder, Workflow, and Workflow Task definitions, the Informatica Adapter also requires a database connection to the PowerCenter Repository Database. Database connectivity is provided via the Java Database Connectivity (JDBC) programming interface.
Installing the Informatica Adapter
To install the Informatica adapter:
Step 1 Stop the master.
For Windows:
a. Click the Start button and then select Programs>TIDAL Software>Scheduler>Master>Service Control Manager.
b. In the Service list, verify that the master is displayed and click Stop to stop the master.
For Unix:
a. Stop the master by entering tesm stop.
b. Verify the master is stopped by entering tesm status.
Step 2 Copy the .pkg file into the config folder located in the master installation directory (there should already be a master.props file there).
Step 3 Restart the master:
a. For Windows, click Start in the Service Control Manager.
b. For Unix, restart the master by entering tesm start.
The Master will deploy the .pkg file and move it from the config folder to the services folder. An Adapter GUID directory is also created under the services folder.
For Windows: C:\Program Files\TIDAL\Scheduler\master\services\{7640B420-5530-11DE-8812-7B8656D89593}
For Unix: /opt/TIDAL/Scheduler/master/services/{7640B420-5530-11DE-8812-7B8656D89593}
Step 4 Restart the Enterprise Scheduler Client by clicking the Windows Start button and selecting Programs>TIDAL Software>Client>Client. When the Client connects, it will download the new package.
The next several steps involve configuring the system for use with the Informatica Adapter. Once the .pkg file has been deployed, you must stop the Master service and restart it after completing the following steps.
Configuring the Informatica Adapter
To install and configure the Informatica libraries:

Note For Unix, add the following entries to the user's profile located in the user's home directory, for example, .profile or .bash_profile (Linux).
You will need to source the profile after applying all profile updates, for example, . ~/.profile.
After the following steps are performed, the master must be restarted for the configuration to take effect.
Step 1 Extract the libraries from the infalib archive to the master machine:
a. Create a directory under the master services directory called infa.
b. Windows Example: C:\Program Files\TIDAL\master\services\infa\
c. Unix Example: /opt/tidal/master/services/infa
d. Extract the archive to this location. The archive distribution contains directories: lib and locale. The system will be configured to refer to these locations in the next steps.
Step 2 Configure the system path to include the Informatica library path (i.e., the lib directory).
Windows Example: C:\Program Files\TIDAL\master\services\infa\lib
For Windows, include the library path in the "Path" Environment Variable.
Unix Example: /opt/tidal/master/services/infa/lib
For Solaris/Linux, include the library path in LD_LIBRARY_PATH.
For AIX, include the path in LIBPATH. For 64-bit also include LD_LIBRARY_PATH.
For HPUX, include the path for SHLIB_PATH.
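For example, on Linux the corresponding profile entry might look like the following (assuming the extraction path shown above):
export LD_LIBRARY_PATH=/opt/tidal/master/services/infa/lib:$LD_LIBRARY_PATH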
Step 3 Create or update the INFA_DOMAINS_FILE environment variable to point to the location of the Informatica domains.infa file for the PowerCenter configuration.
This requires that the domains.infa file be local to the Master machine; copy it from your PowerCenter installation as needed. Put this file in the infa directory created in Step 1a.

Note To configure connections to multiple PowerCenter servers, modify the local domains.infa file that was copied to the Master machine. Add a vector XML tag entry for each server that will be configured as an Informatica Adapter.
The following example includes server information for two PowerCenter servers, one for Dev and another for Prod. These are referred to as dev-infa and prod-infa, respectively, in the sample domains.infa file.
<Portals xmlns:common="http://www.informatica.com/pcsf/common" xmlns:usermanagement="http://www.informatica.com/pcsf/usermanagement" xmlns:domainservice="http://www.informatica.com/pcsf/domainservice" xmlns:logservice="http://www.informatica.com/pcsf/logservice" xmlns:domainbackup="http://www.informatica.com/pcsf/domainbackup" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:metadata="http://www.informatica.com/pcsf/metadata" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:domainconfigservice="http://www.informatica.com/pcsf/domainconfigservice" xmlns:alertservice="http://www.informatica.com/pcsf/alertservice" xmlns:licenseusage="http://www.informatica.com/pcsf/licenseusage" xmlns:webserviceshub="http://www.informatica.com/pcsf/webserviceshub" xsi:type="common:PCSFVector" objVersion="1.1.19">
<vector xsi:type="domainservice:Portals" objVersion="1.1.19">
<domainName>Domain_dev-infa</domainName>
<address xsi:type="metadata:NodeRef" objVersion="1.1.19">
<vector xsi:type="domainservice:Portals" objVersion="1.1.19">
<domainName>Domain_prod-infa</domainName>
<address xsi:type="metadata:NodeRef" objVersion="1.1.19">
Windows Example:
INFA_DOMAINS_FILE=C:\Program Files\TIDAL\master\services\infa\domains.infa
Unix Example:
export INFA_DOMAINS_FILE=/opt/TIDAL/master/services/infa/domains.infa
Step 4 Configure the Locale Path for the Informatica Library by setting the TDLINFA_LOCALE service.props value of the Informatica Adapter. In the config directory located under the Adapter’s GUID directory, create or update the service.props file (create both the directory and the file if they do not yet exist). Include an entry for TDLINFA_LOCALE that points to the Load Manager Library locale directory.
Windows Example:
C:\Program Files\TIDAL\Scheduler\master\services\{7640B420-5530-11DE-8812-7B8656D89593}\config\service.props
Unix Example:
/opt/tidal/master/services/{7640B420-5530-11DE-8812-7B8656D89593}/config/service.props
TDLINFA_LOCALE=C:\\Program Files\\TIDAL\\Scheduler\\master\\services\\infa\\locale
TDLINFA_LOCALE=/opt/tidal/master/services/infa/locale
Step 5 You will need access to the database JDBC drivers for connectivity to the PowerCenter Repository database. Obtain the JDBC .jar files from the vendor as needed and copy them to the services lib directory.
Windows Example:
C:\Program Files\TIDAL\Scheduler\master\services\{7640B420-5530-11DE-8812-7B8656D89593}\lib
Unix Example:
/opt/tidal/master/services/{7640B420-5530-11DE-8812-7B8656D89593}/lib
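For example, if the PowerCenter Repository runs on Oracle, the vendor driver jar (the file name below is illustrative and varies by database and version) might be copied on Unix as follows:
cp ojdbc6.jar /opt/tidal/master/services/{7640B420-5530-11DE-8812-7B8656D89593}/lib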
Step 6 Reboot the Master machine on Windows as needed. Source the profile file as needed on Unix.
SAP Adapter
While the SAP adapter software is already installed as part of a normal installation of TES, you must download and install the Java connector software provided by SAP, called the JCO 3.0 component. SAP JCO 3.0 is necessary for a Java application (like the Enterprise Scheduler) to work with SAP.
Installing SAP JCO
The master requires Java connector (JCO) software from SAP. SAP’s JCO middleware allows a Java application to communicate with an SAP system. Each operating system requires its own version of the JCO that can be downloaded from SAP.
Step 1 In your web browser, go to the following URL: http://service.sap.com/patches.
A Client Authentication dialog displays to request an authentication certificate.
a. If you have such a certificate, select it and click OK.
b. If you do not have a certificate, click OK to display the Enter Network Password dialog.
Step 2 Enter the user name and password supplied by SAP into the respective text fields and click OK.
The SAP Support Packages and Patches Web page displays.
Step 3 Navigate to 3.x SAP Java Connector Download Page.
Step 4 Various operating systems are listed. Click on the appropriate operating system to access its archive file for downloading. Follow the instructions for installing the JCO that are included in the archive file.
Step 5 In the initial setup of SAP JCO 3.0, two files from the SAP JCO .zip file are necessary: the sapjco3.jar archive and the JCO native library for your platform (for example, sapjco3.dll on Windows or libsapjco3.so on Unix).
After installing the JCO, add {sapjco-install-path}/sapjco3.jar to your CLASSPATH environment variable, and then add the directory where the JCO 3.x native libraries are installed to your system library path. You may need to reboot your system.
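For example, on Linux, assuming the JCO was extracted to /opt/sapjco3, the profile entries might read:
export CLASSPATH=/opt/sapjco3/sapjco3.jar:$CLASSPATH
export LD_LIBRARY_PATH=/opt/sapjco3:$LD_LIBRARY_PATH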
OS400 Adapter
To operate properly, the OS/400 adapter from Enterprise Scheduler has the following prerequisites.
Minimum Software Requirements
The minimum software release for the Scheduler OS/400 adapter implementation is OS/400 version V5R2M0.
See your Tidal Enterprise Scheduler Reference Guide for a full list of requirements.
There are different authorities required depending on whether the user is submitting the job or having the job submitted for them.
The following services must be running on the OS/400 machine:
A user defined on the OS/400 manages the connection to the OS/400 and submits jobs to run under different users. It is strongly recommended that this user have QSECOFR authorities and be able to issue the SBMJOB command. This user must have the following authorities (see the example after this list):
- *USE authority to the other user’s profile
- *USE authority to the command specified in the Command parameter and *EXECUTE authority to the library containing that command
- *READ authority to the job description (JOBD) and *EXECUTE authority to the library containing that job description
- *USE authority to the job queue (JOBQ) and *EXECUTE authority to the library containing that job queue
- *USE and *ADD authority to the message queue (MSGQ) and *EXECUTE authority to the library containing that message queue
- *USE authority to the sort sequence table (SRTSEQ) and *EXECUTE authority to the library containing that sort sequence table
- *EXECUTE authority to all auxiliary storage pool (ASP) device descriptions in the initial ASP group (INLASPGRP)
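These object authorities are typically granted with the GRTOBJAUT command. For example (the library, object, and user names below are hypothetical):
GRTOBJAUT OBJ(MYLIB/MYJOBD) OBJTYPE(*JOBD) USER(TESUSER) AUT(*READ)
GRTOBJAUT OBJ(MYLIB) OBJTYPE(*LIB) USER(TESUSER) AUT(*EXECUTE)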
The user that the job is being submitted for (as specified in the User text box on the Page 4 tab) must have the following authorities:
- *USE authority to the job description (JOBD)
- *READ authority to the output queue (OUTQ) and *EXECUTE authority to the library containing that output queue
- *USE authority to all auxiliary storage pool (ASP) device descriptions in the initial ASP group (INLASPGRP)
- *USE authority to the library specified for the current library (CURLIB) parameter
- *USE authority to all the libraries specified for the initial library list (INLLIBL) parameter
OS/400 Configuration
While the OS/400 adapter software is already installed as part of a normal installation of Scheduler, you must perform the following steps to license and configure the adapter before you can run OS/400 jobs:
- License the connection(s) to the AS/400 machine. You cannot define an OS/400 connection until you have applied the OS/400 license from TIDAL Software. For details, refer to the Cisco Tidal Enterprise Scheduler 6.2 Online Help.
- Define an OS/400 connection so the master can communicate with an AS/400 machine. For details, refer to the Cisco Tidal Enterprise Scheduler 6.2 Online Help.
- Define an OS/400 user as a runtime user in TES and add this user to other users’ runtime users list.
zOS Adapter
The Gateway component of the z/OS adapter provides the following features:
- Installs without an IPL
- Full sysplex support
- Monitors SMF records
- Modification of parameters and processes without restarting
- SMF processing is independent of other concurrent SMF processes and prior to any process that may alter SMF data
- Fault tolerance
- Supports OS/390 and z/OS
The Gateway uses three Started Tasks:
Installing the zOS Gateway
The Gateway sits between the Scheduler master and the Systems Management Facilities (SMF) component on z/OS. The Gateway component tracks job dependencies on batch jobs that execute on z/OS. These job dependencies can be tracked not only by job but by individual job steps that comprise a job. The Gateway can run without the SDSF component of z/OS. If the network connection between Scheduler and the Gateway is broken, the Gateway continues to process the SMF job data and archive all job information so that it can be relayed to the master whenever the connection is restored.
Step 1 Insert the installation DVD into the DVD-ROM drive of a Client Manager machine.
Step 2 On the installation DVD, locate the following three files in the zOS Agent\zOS Gateway directory:
This will look like the following screen:
Directory of <DVD-ROM>:\zOS Agent\zOS Gateway
05/09/2002 07:39p 5,553,600 unload.bin
04/01/2002 11:15a 5,600 unlcntl.bin
04/01/2002 11:15a 4,320 unlparm.bin
3 File(s) 5,563,520 bytes
0 Dir(s) 396,230,656 bytes free
Preallocate the size of these three data sets or ensure your site's defaults are large enough that you do not receive a B37 abend when FTPing the files. (All three data sets are FB-80-3120.)
hlq.UNLOAD.BIN 120 tracks
hlq.UNLCNTL.BIN 1 track
hlq.UNLPARM.BIN 1 track
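For example, the data sets might be preallocated with TSO ALLOCATE commands similar to the following (the space values are illustrative; adjust them to your site's standards):
ALLOCATE DATASET('hlq.UNLOAD.BIN') NEW TRACKS SPACE(120,10) RECFM(F,B) LRECL(80) BLKSIZE(3120)
ALLOCATE DATASET('hlq.UNLCNTL.BIN') NEW TRACKS SPACE(1,1) RECFM(F,B) LRECL(80) BLKSIZE(3120)
ALLOCATE DATASET('hlq.UNLPARM.BIN') NEW TRACKS SPACE(1,1) RECFM(F,B) LRECL(80) BLKSIZE(3120)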
Step 3 Select the three files and FTP them to the z/OS server and desired HLQ directory, using the binary transfer mode. (By default, this directory is usually the same as the user name you connect with.)
The following is an example of FTPing the files:
ftp <host name>
Connected to <host name>.
220-FTPD1 IBM FTP CS V2R10 at STRONG.COM, 17:41:44 on 2002-05-10.
220 Connection will close if idle for more than 5 minutes.
User <host name>: <user name>
331 Send password please.
Password:
230 <user name> is logged on. Working directory is <"directory">.
ftp> bin
200 Representation type is Image
ftp> put unload.bin
200 Port request OK.
125 Storing data set IBMUSER.UNLOAD.BIN
250 Transfer completed successfully.
ftp: 1599840 bytes sent in 2.26Seconds 706.96Kbytes/sec.
ftp> put unlcntl.bin
200 Port request OK.
125 Storing data set IBMUSER.UNLCNTL.BIN
250 Transfer completed successfully.
ftp: 473200 bytes sent in 0.45Seconds 1049.22Kbytes/sec.
ftp> put unlparm.bin
200 Port request OK.
125 Storing data set IBMUSER.UNLPARM.BIN
250 Transfer completed successfully.
ftp: 27200 bytes sent in 0.00Seconds 27200000.00Kbytes/sec.
ftp> quit
221 Quit command received. Goodbye.
Step 4 Once the files have been FTPed to data sets, you must unload and create the library files. Use the TSO RECEIVE command on each data set to create partitioned data sets (PDS). These three files will create the following libraries:
Start with the UNLCNTL file first because it contains the JCL, Started Tasks, PROCs, and miscellaneous CLISTs needed to create the other data sets (the hlq.JOBDATA VSAM data set and sample JOBs).
This step might look like the following example:
tso receive indsn('hlq.unload.bin')
INMR901I Dataset hlq.LOADLIB from TIDAL on PLUTO
INMR906A Enter restore parameters or 'DELETE' or 'END' +
Respond by pressing Enter, or supply restore parameters such as dsn('dataset.name').
If you get the following message, respond with an R to overwrite the members.
IEBCOPY MESSAGES AND CONTROL STATEMENTS PAGE 1
IEB1135I IEBCOPY FMID HDZ11F0 SERVICE LEVEL NONE DATED 20000815 DFSMS 02.10.00 OS/390 02.10.00 HBB7703 CPU 1247
IEB1035I IBMUSER ISPFPROC DBSPROC 13:26:13 FRI 10 MAY 2002 PARM=''
COPY INDD=((SYS00018,R)),OUTDD=SYS00016
IEB1013I COPYING FROM PDSU INDD=SYS00018 VOL=OS39M1 DSN=SYS02130.T132612.RA00.IBMUSER.R0100505
IEB1014I TO PDS OUTDD=SYS00016 VOL=OS39M1 DSN=IBMUSER.LOADLIB
IEB167I FOLLOWING MEMBER(S) LOADED FROM INPUT DATA SET REFERENCED BY SYS00018
IEB154I DEINIT HAS BEEN SUCCESSFULLY LOADED
...
IEB154I TVAVTOC1 HAS BEEN SUCCESSFULLY LOADED
IEB1098I 156 OF 156 MEMBERS LOADED FROM INPUT DATA SET REFERENCED BY SYS00018
IEB144I THERE ARE 45 UNUSED TRACKS IN OUTPUT DATA SET REFERENCED BY SYS00016
IEB149I THERE ARE 5 UNUSED DIRECTORY BLOCKS IN OUTPUT DIRECTORY
IEB147I END OF JOB - 0 WAS HIGHEST SEVERITY CODE
INMR001I Restore successful to dataset '<User>.LOADLIB'
***
R
Oracle Applications Adapter
The TES adapter for Oracle Applications integrates Oracle Applications into TES using a concurrent manager bridge.
The Oracle Applications Adapter from TES uses Net*8 (SQL*NET) to connect directly to Oracle databases when accessing Oracle Applications.
Oracle databases compile and store procedures and functions in units called packages. The Oracle Applications Adapter uses Oracle’s packages and other packages customized by TES in combination with SQL statements to integrate the TES job scheduler with the Concurrent Manager process that monitors and controls the Oracle Applications job. The Concurrent Manager monitors and responds to the data stored within the Oracle database using the packages available to it.
The customized packages supplied by TES must be compiled in Oracle Applications before a connection between TES and Oracle Applications can be established. An error occurs in TES if you try to establish a connection to an Oracle Applications instance before the proper customized packages are installed on the designated Oracle Applications instance.
Any inserts and updates to the standard Oracle Applications tables are done using the standard APIs present in the Oracle Applications database. Nothing is deleted from the standard Oracle Applications database, and no database schema objects are modified.
Minimum Software Requirements
The minimum software requirements for the Oracle Applications Adapter for TES are:
Installing and Configuring the Adapter
There are two components to the Oracle Applications adapter. One component is the Oracle Applications adapter itself while the other part is a bridge component that provides a link between the adapter and the Oracle Applications program. The Oracle Applications adapter is part of the normal TES installation and does not require a separate installation. However, the Bridge component does require installation and the procedure to install it is described in the following section.
Completing the Bridge Prerequisites
The Oracle Applications Bridge is comprised of various PL/SQL stored procedures and forms used to pass job parameters to the Oracle database. The Bridge component of the Oracle Applications adapter is not part of the regular TES installation and requires a separate installation procedure.
The following prerequisites must be completed before installing the Oracle Applications Bridge:
- The user must be logged on to Windows/Unix as the application owner (usually applmgr).
- Run the application environment file (usually Appsora.env under $APPL_TOP) in the current shell.
- Grant the execute privilege on the sys.dbms_obfuscation_toolkit package to the APPLSYSPUB user. This package is used by the Bridge to encrypt and decrypt data. To grant this privilege, connect to the database as system (or SYSDBA) and, from the SQL prompt, enter:
SQL>grant execute on sys.dbms_obfuscation_toolkit to applsyspub;
- Create tablespace for the Table and Index spaces before starting installation. To configure the tablespaces to autoextend:
SQL>CREATE TABLESPACE sabdg_data DATAFILE '/do1/oracle/testdata/sabdg_data.dbf' SIZE 100M AUTOEXTEND ON NEXT 20M MAXSIZE UNLIMITED;
SQL>CREATE TABLESPACE sabdg_index DATAFILE '/do1/oracle/testdata/sabdg_idx.dbf' SIZE 50M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;
- While no existing $APPL_TOP objects/files are modified when installing the Bridge, three new objects/files that start with SABDG are created.

Note The tablespace names and data file paths used above are only examples. You can use your own names.
Installing the Bridge for 11i or R12
The batch file that installs the Bridge requires the following parameters:
- APPS user (or equivalent) – The Apps user (or its equivalent) in the Oracle Applications program.
- APPS password – The password of the Apps user used to access Oracle Applications.
- Data tablespace – The name of the data tablespace (sabdg_data).
- Index tablespace – The name of the index tablespace (sabdg_index).
- TNS name – The TNS string used to connect to the database (Windows only).
- Temp tablespace – The temporary tablespace for the SABDG user.
- System password – The database system user password. This is required because the installation process creates the SABDG user in the database to own tables, sequences, and indexes.
Use these parameters when running the batch file to install the Bridge. The installation and upgrade procedures for both Windows and Unix forms servers are described next.
Initial Installation–Windows Forms Server
Copy the 11i or R12 Bridge files from the \OraAppsBridge\Windows directory in the installation DVD-ROM to a temporary directory on the forms server. The temporary directory must be on the same drive as the APPL_TOP.
Step 1 On the command line, using the listed parameters, enter:
install_11i <APPS User> <APPS Password> <Data Tablespace> <Index Tablespace> <TNS Alias Name> <Temp Tablespace> <System Password>
install_R12 <APPS User> <APPS Password> <Data Tablespace> <Index Tablespace> <TNS Alias Name> <Temp Tablespace> <System Password>
Upgrading the 11i or R12 Bridge–Windows
Step 1 On the command line, using the listed parameters, enter:
upgrade_11i <APPS User> <APPS Password> <Data Tablespace> <Index Tablespace> <TNS Alias Name> <Temp Tablespace> <System Password>
upgrade_R12 <APPS User> <APPS Password> <Data Tablespace> <Index Tablespace> <TNS Alias Name> <Temp Tablespace> <System Password>
Initial Installation–Unix Forms Server
Copy the 11i or R12 Bridge TdlOraAppsBdg.tar file from the /OraAppsBridge/Unix directory in the installation DVD-ROM to a temporary directory on the forms server.
Step 1 Extract files from TdlOraAppsBdg.tar file:
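For example, a standard tar extraction would be:
tar -xvf TdlOraAppsBdg.tar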
Step 2 From the temporary directory where you copied the Bridge files, at the cursor, enter:
Step 3 At the cursor, using the listed parameters, enter:
sh ./install_11i.sh <APPS User> <APPS Password> <Data Tablespace> <Index Tablespace> <Temp Tablespace> <System Password>
sh ./install_R12.sh <APPS User> <APPS Password> <Data Tablespace> <Index Tablespace> <Temp Tablespace> <System Password>
Upgrading the 11i or R12 Bridge–Unix
Step 1 Extract files from TdlOraAppsBdg.tar file
Step 2 From the temporary directory where you copied the Bridge files, at the cursor, enter:
Step 3 At the cursor, using the listed parameters, enter:
sh ./upgrade_11i.sh <APPS User> <APPS Password> <Data Tablespace> <Index Tablespace> <Temp Tablespace> <System Password>
sh ./upgrade_R12.sh <APPS User> <APPS Password> <Data Tablespace> <Index Tablespace> <Temp Tablespace> <System Password>

Note If you have a multi-tier Oracle Apps architecture containing multiple forms servers, the Bridge for Oracle Apps must be installed on only one of the forms servers and upgraded on the rest of the forms servers to ensure distribution of the Bridge forms.
Verifying Successful Installation/Upgrade
Installation and upgrade procedures can be verified by checking a log file that is created when the Bridge is installed. This log file, called Verify_post.log, is created in the same directory where the Bridge was installed.
Open the Verify_post.log file.
There are three values displayed in the log file:
The TOT and VAL values should read 36.
If the values displayed in the log file are the proper values, then installation/upgrade was successful. Any deviation from these values indicates that the installation/upgrade was unsuccessful.
Uninstalling the Bridge
To uninstall the Bridge component of the Oracle Applications adapter, you must delete all of the Bridge objects, the Bridge owner and all of the forms on the forms server. The procedures to delete the Bridge owner and its objects are the same for both the Windows and Unix platforms but the procedures for deleting forms from the forms server differ for each platform.
To delete the Bridge objects (Windows and Unix):
Step 1 Login as Apps to the apps database.
Step 2 Run the sabdg_drobj.sql script, which is found in the \OraAppsBridge\Windows directory on the installation DVD-ROM.
To delete the Bridge owner (Windows and Unix):
Step 1 Login as system to the apps database.
Step 2 Drop user sabdg cascade.
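For example, connected as system, enter the following at the SQL prompt:
SQL>drop user sabdg cascade;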
To delete all forms on the forms server:
rm $AU_TOP/forms/US/SABDG/*.fmb
rm -r $FND_TOP/forms/US/sabdg
MapReduce Adapter
Hadoop MapReduce is a software framework for writing applications that process large amounts of data (multi-terabyte data-sets) in-parallel on large clusters (up to thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
A Cisco Tidal MapReduce Adapter job divides the input data-set into independent chunks that are processed by the map tasks in parallel. The framework sorts the map’s outputs, which are then input to the reduce tasks. Typically, both the input and output of the job are stored in a file-system. The framework schedules tasks, monitors them, and re-executes failed tasks.
Minimally, applications specify the input/output locations and supply map and reduce functions via implementations of appropriate interfaces and/or abstract-classes. These, and other job parameters, comprise the job configuration. The Hadoop job client then submits the job (jar/executable) and configuration to the JobTracker.
The JobTracker then assumes the following responsibilities:
- Distributes the software/configuration to the slaves
- Schedules and monitors tasks
- Provides status and diagnostic information to the job-client
The MapReduce Adapter serves as the job client to automate the execution of MapReduce jobs as part of a Tidal Enterprise Scheduler (TES) managed process. The Adapter uses the Apache Hadoop API to submit and monitor MapReduce jobs with full scheduling capabilities and parameter support. Alternatively, the Adapter may be configured to connect to a Cloudera Hadoop or MapR distribution. As a platform independent solution, the Adapter can run on any platform where the TES master runs.
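For reference, this is the kind of job submission the Adapter automates through the Hadoop API; run manually from the command line it would look something like the following (the jar name and HDFS paths are illustrative only):
hadoop jar hadoop-examples.jar wordcount /user/tidal/input /user/tidal/output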
Installing the MapReduce Adapter
The MapReduce Adapter software is not installed as part of a standard installation of TES; you must install and configure the adapter before you can schedule and run MapReduce jobs.
To install and configure the MapReduce adapter:
Step 1 Stop the master.
Step 2 Delete the {D9AC03D5-41ED-4B1E-8A45-B2EC8BDE3EA0} directory and mapreduceservice.pkg under the /TIDAL/Scheduler/Master/services directory.
Step 3 Place the new mapreduceservice.pkg file at /TIDAL/Scheduler/Master/config.
Step 4 Restart the master.
The /TIDAL/Scheduler/Master/services/{D9AC03D5-41ED-4B1E-8A45-B2EC8BDE3EA0} directory is created.
Step 5 In the {D9AC03D5-41ED-4B1E-8A45-B2EC8BDE3EA0} directory, create a subdirectory named Config.
Step 6 Create the service.props file in the Config directory.
Step 7 (For Apache 1.1.2 distribution only) Add the following lines in the service.props file:
CLASSPATH=C:\\Program Files\\TIDAL\\Scheduler\\Master\\services\\{D9AC03D5-41ED-4B1E-8A45-B2EC8BDE3EA0}\\lib\\*;${CLASSPATH}
Step 8 (For Cloudera 3 distribution only) Add the following line in the service.props file:
Step 9 (For Cloudera 4 distribution only) Add the following lines in service.props file:
CLASSPATH=C:\\Program Files\\TIDAL\\Scheduler\\Master\\services\\{D9AC03D5-41ED-4B1E-8A45-B2EC8BDE3EA0}\\lib\\*;${CLASSPATH}
Step 10 (For MapR distribution only) Install the MapR client on the TES master machine, and add the following lines in the service.props file:
JVMARGS=-Djava.library.path=C:\\opt\\mapr\\hadoop\\hadoop-0.20.2\\lib\\native\\Windows_7-amd64-64
CLASSPATH=C:\\opt\\mapr\\hadoop\\hadoop-0.20.2\\lib\\*;${CLASSPATH}
Hive Adapter
The Cisco Tidal Enterprise Scheduler Hive Adapter provides the automation of HiveQL commands as part of the cross-platform process organization between Tidal Enterprise Scheduler (TES) and the TES Hadoop Cluster.
The Hive Adapter allows you to access and manage data stored in the Hadoop Distributed File System (HDFS™) using Hive's query language, HiveQL. HiveQL syntax is similar to SQL standard syntax.
The Hive Adapter, in conjunction with TES, can be used to define, launch, control, and monitor HiveQL commands submitted to Hive via JDBC on a scheduled basis. The Adapter integrates seamlessly into an enterprise scheduling environment.
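For illustration, a HiveQL command scheduled through the Adapter might be as simple as the following (the table and column names are hypothetical):
SELECT department, COUNT(*) AS headcount FROM employees GROUP BY department;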
The Hive adapter includes the following features:
- Connection management to monitor system status with a live connection to the Hive Server via JDBC
- Hive job and event management includes the following:
– Scheduling and monitoring of HiveQL commands from a centralized work console with Enterprise Scheduler
– Dynamic runtime overrides for parameters and values passed to the HiveQL command
– Output-formatting options to control the results, including table, XML, and CSV
– Defined dependencies and events with Enterprise Scheduler for scheduling control
– Runtime MapReduce parameters overrides if the HiveQL command results in a MapReduce job.
Installing the Hive Adapter
The Hive adapter software is not installed as part of a standard installation of TES; you must install and configure the adapter before you can schedule and run Hive jobs.
To install and configure the Hive adapter:
Step 1 Stop the master.
Step 2 Delete the {207463B0-179B-41A7-AD82-725A0497BF42} directory and hiveservice.pkg in the /TIDAL/Scheduler/Master/services directory.
Step 3 Place the new hiveservice.pkg file at /TIDAL/Scheduler/Master/config.
Step 4 Restart the master.
The /TIDAL/Scheduler/Master/services/{207463B0-179B-41A7-AD82-725A0497BF42} directory is created.
Step 5 In the {207463B0-179B-41A7-AD82-725A0497BF42} directory, create a Config subdirectory.
Step 6 Create the service.props file in the Config directory.
Step 7 In the service.props file, add the jarlib properties as follows:
a. For Apache 1.1.2, add: jarlib=apache1.1.2
b. For Cloudera 4, add: jarlib=cdh4
c. For MapR, add: jarlib=apache1.1.2
Sqoop Adapter
The Cisco Tidal Enterprise Scheduler (TES) Sqoop Adapter provides easy import and export of data from structured data stores such as relational databases and enterprise data warehouses. Sqoop is a tool designed to transfer data between Hadoop and relational databases. You can use Sqoop to import data from a relational database management system (RDBMS) into the Hadoop Distributed File System (HDFS), transform the data in Hadoop MapReduce, and then export the data back into an RDBMS. The Sqoop Adapter allows users to automate the tasks carried out by Sqoop.
The import is performed in two steps. In the first step, Sqoop introspects the database to gather the necessary metadata for the data being imported. The second step is a map-only Hadoop job that Sqoop submits to the cluster. It is this job that does the actual data transfer, using the metadata captured in the previous step.
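For reference, this is the kind of Sqoop invocation the Adapter automates; run manually it might look like the following (the connection string, credentials, and paths are illustrative only):
sqoop import --connect jdbc:mysql://dbserver/sales --username scott --password tiger --table orders --target-dir /user/tidal/orders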
Installing the Sqoop Adapter
The Sqoop adapter software is not installed as part of a standard installation of TES; you must install and configure the adapter before you can schedule and run Sqoop jobs.
To install and configure the Sqoop adapter:
Step 1 Stop the master.
Step 2 Delete the {722A6A78-7C2C-4D8B-AA07-B0D9CED6C55A} directory and the sqoopservice.pkg file in the /TIDAL/Scheduler/Master/services directory.
Step 3 Place the new sqoopservice.pkg file at /TIDAL/Scheduler/Master/config.
Step 4 Restart the master.
The /TIDAL/Scheduler/Master/services/{722A6A78-7C2C-4D8B-AA07-B0D9CED6C55A} directory is created.
Step 5 In the {722A6A78-7C2C-4D8B-AA07-B0D9CED6C55A} directory, create a Config subdirectory.
Step 6 Create the service.props file in the Config directory.
Step 7 (For Apache 1.1.2 distribution only) Add the following lines in the service.props file:
CLASSPATH=C:\\Program Files\\TIDAL\\Scheduler\\Master\\services\\{722A6A78-7C2C-4D8B-AA07-B0D9CED6C55A}\\lib\\*;${CLASSPATH}
Step 8 (For Cloudera 3 distribution only) Add the following line in the service.props file:
Step 9 (For Cloudera 4 distribution only) Add the following lines in the service.props file:
CLASSPATH=C:\\Program Files\\TIDAL\\Scheduler\\Master\\services\\{722A6A78-7C2C-4D8B-AA07-B0D9CED6C55A}\\lib\\*;${CLASSPATH}
Step 10 (For MapR distribution only) Install the MapR client on the TES master machine, and add the following lines in the service.props file:
JVMARGS=-Djava.library.path=C:\\opt\\mapr\\hadoop\\hadoop-0.20.2\\lib\\native\\Windows_7-amd64-64
CLASSPATH=C:\\opt\\mapr\\hadoop\\hadoop-0.20.2\\lib\\*;${CLASSPATH}

Note Make sure a JDK is installed on the machine, set JAVA_HOME, and add JAVA_HOME/bin to the system PATH. The path to the database drivers and the Sqoop jars must be added to the HADOOP_CLASSPATH, as shown in the example below.
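For example, on Linux these variables might be set as follows (the paths shown are placeholders; substitute your own locations):
export JAVA_HOME=/usr/java/default
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_CLASSPATH=/usr/lib/sqoop/lib/*:/usr/share/java/mysql-connector-java.jar:$HADOOP_CLASSPATH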