Note: You have to arrange your own Mainframe ID. Mainframe ID will not be provided with the course.
Welcome to the most elaborate and detailed course about Mainframe JOB CONTROL LANGUAGE (JCL) on the whole internet.
This course has been the bestselling and top-reviewed JCL course on Udemy. I have built this course exclusively for the Udemy platform.
You will get:
JCL from scratch to advanced level.
Lectures in whiteboard animation format.
50+ JCL programs along with code. You can download these programs as well.
Monthly updates on new programs added to the course.
Professionally made subtitles (transcripts) in English, along with translated subtitles in many other languages:
English, Spanish, Portuguese, Simplified Chinese, Hindi, French, German, Dutch, Irish, Turkish, Vietnamese, Arabic, Bengali, Hebrew, Italian, Japanese, Korean, Russian and Thai.
For this course, I have made whiteboard animation videos to make learning easy.
Lectures such as BATCH are presented in whiteboard animation format, so you can learn these topics easily.
I have also attached all the presentations in this course so you do not have to take notes while watching the lectures.
I also have attached all the JCL programs used in this course.
This is a practical JCL tutorial on a Mainframe computer. Do you have a basic understanding of Mainframe systems but are not sure how to start coding? This course will teach you what you need to know. It is the first step for anyone who wants to start coding programs on Mainframe systems and begin a new career as a Mainframe professional.
This Course is DIRECT and TO THE POINT.
8+ hours of video content, with the presentations and code used in the course.
I also UPDATE this course periodically to include even more Videos and Projects. New Resources and Articles are also added.
If you ever have any questions, please feel free to message me directly and I will do my best to get back to you as soon as possible.
Build a foundation in Mainframe with this tutorial.
You will Learn:
What a JCL is.
The various Statements and how to code them in a JCL Program.
Running and diagnosing JCL programs on a Mainframe
Procedures used in JCL
Different utilities of JCL
Generation Data Group
Parameters used in JCL
Conditional processing in JCL
Various Utilities used in JCL
SORT
and much more.......
Mainframes are extensively used in large corporations that deal with huge amounts of data every day. Over 70% of global Fortune 500 companies use Mainframes to run their business, for everything from cloud to mobile to big data and analytics. There are 1.1 million customer transactions per second on Mainframes, compared with about 40,000 Google searches per second. IBM is the leading manufacturer of Mainframe systems.
Content and Overview
Through this course, you'll learn about the Job Control Language on the Mainframe system.
Starting with an overview of the JCL, this course will take you through the types of statements used in JCL.
With these basics mastered, the course will take you through the different operations that you can perform on a dataset, and will introduce the GDG and its use on Mainframe systems.
You will then learn SORT using an IMDB dataset.
Upon completion, you will be literate in Job Control Language and will understand how a JCL program is coded and executed on a Mainframe.
The PDF attached to this lecture contains the details of the course.
The PDF attached contains all the JCL programs used in the course.
A batch process is the execution of a series of jobs without user interaction. Batch processes are run using Job Control Language. In this lecture I explain batch processing with an example, in whiteboard animation video format.
JOB CONTROL LANGUAGE is used to submit jobs to the Mainframe systems. A JOB is one unit of work. Overview of the JCL is explained in this video. I have created a whiteboard animation for this lecture.
JCL has to be written inside a dataset with a record length of 80. The syntax of JCL is explained, and the requirements for writing a JCL program are also covered.
The JOB statement is the first statement in any JCL program. It gives the name of the job, the location of the output, the priority and other such information. The parameters given on the JOB statement apply to all the steps in the JCL. The JOB statement is explained in detail, including all of its parameters.
This lecture explains all the parameters required in the JOB statement such as CLASS, MSGCLASS, MSGLEVEL, PRTY, NOTIFY and many more.
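As a rough illustration (not the exact code from the lectures), a JOB statement carrying some of these parameters could look like the sketch below; the job name, accounting information and class values are hypothetical and depend on your installation's standards.
//* A hypothetical JOB statement; values here are site-specific
//MYJOB01  JOB (ACCT123),'JCL STUDENT',CLASS=A,MSGCLASS=X,
//         MSGLEVEL=(1,1),NOTIFY=&SYSUID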
The EXEC statement tells the system what programs or procedures to execute in the JCL. There can be multiple EXEC statements in one JCL. The EXEC statement and its parameters are explained in detail.
This lecture also explains all the parameters required for the EXEC statement such as PGM, TIME, REGION, ACCT, PARM, PROC.
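For instance, a minimal EXEC statement might look like the following sketch; the step name is arbitrary and IEFBR14 is simply a do-nothing IBM program used here for illustration.
//* Run IEFBR14 with a CPU time limit of 1 minute and a 4 MB region
//STEP01   EXEC PGM=IEFBR14,TIME=(1,0),REGION=4M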
The Data Definition (DD) statement specifies the files that are to be used in the JCL. DD statements are given after the EXEC statement, and every DD statement must be associated with an EXEC statement. The DD statement is explained in detail in this lecture.
This lecture also explains all the parameters required for the DD statement such as DISP, DSN, DCB, SPACE, UNIT and many more.
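As an illustrative sketch (the dataset name and space values are hypothetical), a step that allocates a new sequential dataset through a DD statement could look like this:
//ALLOC    EXEC PGM=IEFBR14
//* Create and catalog a new dataset; delete it if the step fails
//NEWFILE  DD DSN=USERID.TEST.OUTPUT,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(TRK,(5,2)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)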
A lot of students never leave a review. Please leave a review of this course, so I can reach more people with my content.
JCL is coded from scratch. JOB, EXEC and DD statements are coded to sort a file. The job is then submitted to the system and the output file is analyzed to check whether the job ran fine or not.
Every job, once submitted to the system, goes through certain steps to get completed. The whole process, from submitting the job to its completion, is explained in this lecture. It also shows the subsystems involved in job processing.
Every job step sets a return code based on the status of the execution. Return codes are explained in this video. Difference between return code and MAXCC is also explained.
System Display and Search Facility (SDSF) is a subsystem in z/OS. This subsystem is used to see the details related to any JOB that is running or has completed on the system. This lecture covers all the details such as the messages generated during the job execution, the output generated from the JOB and other such information during a job execution. It also covers some of the commands that can be executed in the SDSF subsystem.
In z/OS we have the SDSF option using which we can see the output of our job execution.
In MVS we have a similar option called OUTLIST, available at option 3.8. You can use this option to see the output of your job execution. The only major difference between z/OS and MVS is that in OUTLIST the output is clubbed together into a single dataset, whereas in z/OS the output is separated into different datasets such as JESMSG, JESJCL and SYSOUT.
All these details of OUTLIST are explained with a live example in this lecture.
A PS dataset is a sequential dataset. PS datasets can be created using option 3.2 in ISPF, and they can also be created using JCL. The whole process of creating a PS dataset using JCL is explained in this lecture, along with the code.
A PDS is a partitioned dataset: it can contain many members inside it, just like a folder. A PDS can be created using option 3.2 in ISPF, and members inside a PDS can be created the same way. We can also create a PDS and its members using JCL. This lecture explains the whole process of creating a PDS using JCL, along with the code.
PS and PDS datasets can easily be deleted using ISPF, and we can do the same using JCL as well. The whole process is explained in detail in this lecture: we first delete a PS using JCL, then a PDS, and finally the members of a PDS. The code for deletion is attached in the resources tab.
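A minimal sketch of deleting datasets through JCL (the dataset names are placeholders); the deletion is done purely through the DISP parameter:
//DELSTEP  EXEC PGM=IEFBR14
//* DISP=(OLD,DELETE,DELETE) deletes the dataset in every case
//DELPS    DD DSN=USERID.TEST.OUTPUT,DISP=(OLD,DELETE,DELETE)
//DELPDS   DD DSN=USERID.TEST.PDS,DISP=(OLD,DELETE,DELETE)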
Generation Data Groups are groups of related datasets. They make it easy to create datasets that need similar dataset names. An overview of GDGs and their properties is given in this lecture.
A GDG can be created through JCL. The process of GDG creation is explained in this lecture. It covers both creating the GDG base and creating a generation for that GDG, and explains all the parameters required. The code is also provided in the resources tab.
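As a rough sketch (the GDG name and limit are hypothetical), the two parts could look like this: an IDCAMS step that defines the GDG base, followed by an IEFBR14 step that catalogs a new generation.
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG (NAME(USERID.SALES.REPORT) LIMIT(5) NOEMPTY SCRATCH)
/*
//* Create and catalog the next (+1) generation of the GDG
//NEWGEN   EXEC PGM=IEFBR14
//GDGFILE  DD DSN=USERID.SALES.REPORT(+1),
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(1,1)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)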
The properties of a GDG can be changed using the IDCAMS utility. This is useful when we have made a mistake while creating a GDG, or when we want to change the properties of an existing GDG. This is called altering a GDG, and it is explained in detail in this lecture.
A GDG has a base and many generations. How can we use one particular generation only? This is solved by referencing the GDG relatively: 0 refers to the latest generation, -1 to the second latest, -2 to the third latest, and so on. Other ways of referencing a GDG are also explained in this lecture.
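For example (a sketch with a hypothetical GDG name), the current generation can be read by referring to it with relative number 0; IEBGENER here simply copies it to the spool.
//READGDG  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//* (0) is the newest generation; (-1) would be the one before it
//SYSUT1   DD DSN=USERID.SALES.REPORT(0),DISP=SHR
//SYSUT2   DD SYSOUT=*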
One particular generation of a GDG can be deleted without affecting any other generation. This is done by giving the full name of the generation or by referencing it with a relative GDG number. More details about deleting a GDG generation are given in this lecture.
A GDG has a base and many generations. To delete a GDG along with its base, we have to use IDCAMS. Two options can be given with the DELETE command; these options, along with the full code to delete a GDG and all its generations, are explained in this lecture.
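A minimal IDCAMS sketch (the GDG name is a placeholder); the FORCE option shown here lets the base be deleted even when generations still exist.
//DELGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE USERID.SALES.REPORT GDG FORCE
/*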
JOBLIB is used to set the default library for all the load modules used in a JCL, which makes searching for load modules faster. More details about JOBLIB are explained in the lecture.
STEPLIB also makes searching for load modules faster. When a STEPLIB is coded, load modules are searched for in the STEPLIB libraries first; only if they are not found there are the default system libraries searched. This lecture explains STEPLIB in detail.
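A sketch of where these DD statements sit (the library names and the program name MYPROG are hypothetical): JOBLIB goes right after the JOB statement and applies to every step, while a STEPLIB inside a step applies to that step only.
//LIBJOB   JOB (ACCT123),'JCL STUDENT',CLASS=A,MSGCLASS=X
//JOBLIB   DD DSN=USERID.COMMON.LOADLIB,DISP=SHR
//STEP01   EXEC PGM=MYPROG
//* STEPLIB overrides JOBLIB for this step only
//STEPLIB  DD DSN=USERID.SPECIAL.LOADLIB,DISP=SHR
//SYSOUT   DD SYSOUT=*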
JCLLIB is explained in detail in this lecture. JCLLIB is used to specify the libraries that hold the procedures and JCL code to be included in a JCL.
A procedure that is coded within a JCL and used within the same JCL is called an instream procedure. Instream procedures are explained in detail in this lecture.
Some parameters in a procedure need to be changed again and again. Such parameters are replaced with symbolic parameters, whose values we can then change dynamically. Symbolic parameters are explained in detail in this lecture.
An instream procedure can't be used outside of the JCL it is coded in. If we want to use a procedure from other JCLs, we have to put it into a separate library. This type of procedure is called a cataloged procedure, which is explained in detail in this lecture.
The SET statement is used to assign a default value to a symbolic parameter. More details are explained, along with the code, in this lecture.
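Tying these ideas together, here is a rough sketch (the dataset names are placeholders) of an instream procedure with a symbolic parameter that has a default on the PROC statement and is overridden when the procedure is invoked:
//PROCJOB  JOB (ACCT123),'JCL STUDENT',CLASS=A,MSGCLASS=X
//* Instream procedure: coded between PROC and PEND in the same JCL
//COPYPROC PROC INFILE=USERID.SAMPLE.DATA
//PSTEP    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=&INFILE,DISP=SHR
//SYSUT2   DD SYSOUT=*
//         PEND
//* Invoke the procedure, overriding the symbolic parameter INFILE
//RUNPROC  EXEC COPYPROC,INFILE=USERID.IMDB.LIST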
We sometimes need to change the names of the datasets used in a procedure. To do this, we can use a method called overriding datasets, which is explained in detail in this lecture.
The COND parameter can be used to decide whether to execute a step or not. It can be given on the JOB statement or on an EXEC statement. The conditions given in a COND parameter are checked, and based on the result the step is either executed or bypassed. This lecture explains the COND parameter in detail.
If we do not give a step name in the COND parameter, it checks the condition code of all the previous steps. If a step name is given, it only checks the condition code of that particular step. This lecture explains this concept in detail.
The COND parameter can be given in two places: either on the JOB statement or on an EXEC statement. When the COND parameter is given on the JOB statement, it applies to every step in the JCL, and processing stops at the first step that satisfies the condition. This lecture explains in detail how we can use the COND parameter on the JOB statement.
COND can be used to never execute a step. In this lecture, I explain in detail the COND parameter you can use to never execute a step: COND=(0,LE) always skips the step.
There are some situations in which we always want to execute a step. In such cases, we can use COND=(4095,LT). This condition is always false, so the step will execute no matter what the condition code of the previous step is. This lecture explains this COND in detail.
We can use COND to execute a step only when a previous step has abended: COND=ONLY. In this case the step is executed only if a previous step has abended; if everything ran normally, the step is bypassed. COND=EVEN, on the other hand, executes the step whether or not a previous step has abended, so a step with COND=EVEN will always be attempted. This lecture explains these parameters in detail.
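A sketch that pulls the COND variants from the last few lectures together (every step runs the dummy program IEFBR14, purely for illustration):
//CONDJOB  JOB (ACCT123),'JCL STUDENT',CLASS=A,MSGCLASS=X
//STEP01   EXEC PGM=IEFBR14
//* Bypass STEP02 if the return code of STEP01 is 8 or higher
//STEP02   EXEC PGM=IEFBR14,COND=(8,LE,STEP01)
//* Never execute STEP03: 0 <= RC is always true, so it is skipped
//STEP03   EXEC PGM=IEFBR14,COND=(0,LE)
//* Execute STEP04 only if an earlier step has abended
//STEP04   EXEC PGM=IEFBR14,COND=ONLY
//* Execute STEP05 whether or not an earlier step has abended
//STEP05   EXEC PGM=IEFBR14,COND=EVEN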
We can use the IF statement to decide whether to execute a step or not. The IF statement is followed by a condition. If that condition is satisfied, the next step is executed; if it is not satisfied, the step is ignored. This lecture explains the IF/ENDIF statement in detail.
We can use IF with ELSE to execute a step based on a condition. The condition in the IF statement is checked; if it is satisfied, the step following it is executed. If the condition is not satisfied, the step coded under the ELSE statement is executed instead. The IF/ELSE statement in JCL is explained in detail in this lecture.
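A minimal sketch of the IF/THEN/ELSE/ENDIF construct (the program names are placeholders):
//IFJOB    JOB (ACCT123),'JCL STUDENT',CLASS=A,MSGCLASS=X
//STEP01   EXEC PGM=IEFBR14
//* Run GOODSTEP when STEP01 ended with RC 0, otherwise run BADSTEP
//         IF (STEP01.RC = 0) THEN
//GOODSTEP EXEC PGM=IEFBR14
//         ELSE
//BADSTEP  EXEC PGM=IEFBR14
//         ENDIF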
Two text files are attached in the resources.
IMDB list: It is a list of over 5000 movies, documentaries, tv series etc.
Column details: It contains the column details of the IMDB list file.
Download both of them and upload the IMDB list to your Mainframe. We will use this dataset in all our SORT programs. The second file will be used to look up the column positions of the data.
In this lecture the syntax of a SORT JCL is explained in detail along with examples.
We can sort a dataset on multiple columns. In this lecture the sort of a dataset based on multiple fields is explained in detail along with examples.
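As a hedged example (the dataset names and column positions are hypothetical, since they depend on how the IMDB file is laid out), a sort on two fields with DFSORT/SYNCSORT could look like this:
//SORTSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=USERID.IMDB.LIST,DISP=SHR
//SORTOUT  DD DSN=USERID.IMDB.SORTED,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,5))
//SYSIN    DD *
* Ascending on a character field, then descending on a numeric field
  SORT FIELDS=(1,30,CH,A,40,4,ZD,D)
/*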
SORT can be used to do a simple copy operation. Copying a dataset is explained in detail, along with an example, in this lecture.
A condition can be added to select only certain records for the output. This can be done using the INCLUDE keyword in SORT. The INCLUDE keyword is explained in detail, along with an example, in this lecture.
SORT can also be used with multiple INCLUDE conditions. This lecture explains in detail the syntax for using INCLUDE with SORT.
We can OMIT selected records from the output using the OMIT keyword. The OMIT keyword is explained in detail in this lecture.
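Using the same kind of SORT step shown earlier, the control statements for INCLUDE might look like this sketch (the column positions and the value are placeholders); an OMIT statement uses exactly the same form but drops the matching records instead of keeping them.
//SYSIN    DD *
  SORT FIELDS=COPY
* Keep only records whose 4-digit numeric field is greater than 2000
  INCLUDE COND=(40,4,ZD,GT,2000)
/*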
SORT can be used to join two files based on a key; the JOINKEYS keyword is used to join two or more such files. This lecture explains the inner join in detail. In an inner join, only the matching records, called PAIRED records, are put in the output file.
A left outer join creates an output file where the non-matching (UNPAIRED) records from the left file are written along with the matching (PAIRED) records. This lecture explains this feature in detail.
A right outer join is when the matching records, along with the non-matching records from the right file, are put in the output file. So in a right outer join, PAIRED records along with UNPAIRED records from the right file are written to the output. This lecture explains the right outer join in detail.
Left and right outer joins can be combined to create what is known as a full outer join. In this case, the PAIRED records along with the UNPAIRED records from both the left and right files are put in the output; hence the name full outer join. This lecture explains the full outer join in detail.
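A rough DFSORT sketch of a full outer join (the DD names SORTJNF1 and SORTJNF2 are the defaults that JOINKEYS looks for; the key position and REFORMAT columns are placeholders). Omitting the JOIN statement entirely would keep only the PAIRED records, i.e. an inner join.
//JOINSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTJNF1 DD DSN=USERID.FILE.LEFT,DISP=SHR
//SORTJNF2 DD DSN=USERID.FILE.RIGHT,DISP=SHR
//SORTOUT  DD SYSOUT=*
//SYSIN    DD *
* Match the two files on a 10-byte key starting in column 1
  JOINKEYS FILE=F1,FIELDS=(1,10,A)
  JOINKEYS FILE=F2,FIELDS=(1,10,A)
* Keep PAIRED records plus UNPAIRED records from both files
  JOIN UNPAIRED,F1,F2
* Build each output record from fields of both inputs
  REFORMAT FIELDS=(F1:1,40,F2:11,20)
  SORT FIELDS=COPY
/*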
If we do not want any PAIRED records and only want the non-matching records from one input file, we can use JOINKEYS for that as well. In this case, only the UNPAIRED records from the chosen input file are copied to the output; we can copy UNPAIRED records from either the left or the right file. This lecture explains UNPAIRED records in detail.
UNPAIRED records from both files can also be copied to the output. In this case, the non-matching records from both input files are written to the output. The lecture explains this in detail.
This lecture is a summary of UNPAIRED records. The non-matching records in the input files are called UNPAIRED records, and this lecture explains them in detail.
OUTREC stands for output record. It can be used to reformat the records written to the output. OUTREC is explained in detail in this lecture.
SORT can replace the data in the output using the keyword FINDREP. FINDREP can find specific data in the dataset and can replace it with some other data. OUTREC with FINDREP is explained in detail in this lecture.
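A small sketch of the control statements for FINDREP (the literal values are made up); it scans each output record and replaces one string with another.
//SYSIN    DD *
  SORT FIELDS=COPY
* Replace every occurrence of 'USA' in the record with 'US '
  OUTREC FINDREP=(IN=C'USA',OUT=C'US ')
/*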
OUTFIL can be used to create multiple copies of a dataset. The OUTFIL statement along with example is explained in detail in this lecture.
One dataset can be split into multiple datasets using the OUTFIL statement. This operation is explained in detail along with examples in this lecture.
OUTFIL can also split a dataset based on a condition, so that only the records satisfying a specific condition are copied to a given output. Splitting a dataset based on a condition is explained in detail in this lecture.
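A sketch of splitting one input into two outputs with OUTFIL (the DD names OUT1 and OUT2 and the condition are placeholders); each OUTFIL needs a matching DD statement in the step.
//SYSIN    DD *
  SORT FIELDS=COPY
* Records whose numeric field is 2000 or more go to OUT1
  OUTFIL FNAMES=OUT1,INCLUDE=(40,4,ZD,GE,2000)
* SAVE picks up every record not written to any other OUTFIL
  OUTFIL FNAMES=OUT2,SAVE
/*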
Utilities are simple programs that are used to perform common operations on datasets or to maintain the system. In this lecture the different types of utilities are explained, along with how to tell whether a utility is a dataset utility or a system utility.
IEBCOMPR is the compare datasets program. It can be used to compare two PS, PDS or PDSE datasets. IEBCOMPR is explained in this lecture, along with the syntax and the full JCL required to compare PS and PDS datasets.
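A minimal sketch for comparing two sequential datasets (the names are placeholders); for sequential input, SYSIN can simply be DUMMY.
//COMPARE  EXEC PGM=IEBCOMPR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=USERID.FILE.ONE,DISP=SHR
//SYSUT2   DD DSN=USERID.FILE.TWO,DISP=SHR
//SYSIN    DD DUMMY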
IEBCOPY is the Library Copy utility. It can be used to Copy, Merge or Compress datasets. Introduction to IEBCOPY is given in this lecture. The syntax of IEBCOPY is explained as well.
IEBCOPY can be used to Copy all members from one PDS to another. The COPY statement is given in the SYSIN which specifies the input and output datasets. Another way to do the same operation is to use the SYSUT1 and SYSUT2 DD statements. Both the JCLs are explained in detail in this lecture.
Merge is the operation where members from several PDS datasets are copied into a single PDS. IEBCOPY can be used for merging: giving several input DD names on the INDD parameter merges those libraries. The full syntax of the merge JCL is explained in this lecture, along with an example.
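A sketch of such a merge (the library names are placeholders); the members of both input libraries end up in the output PDS.
//MERGE    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//IN1      DD DSN=USERID.PDS.ONE,DISP=SHR
//IN2      DD DSN=USERID.PDS.TWO,DISP=SHR
//OUT1     DD DSN=USERID.PDS.MERGED,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=OUT1,INDD=(IN1,IN2)
/*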
We can copy only selected members during the copy operation. The names of the selected members are passed through the SYSIN. The full syntax of the SELECT statement, along with an example JCL, is explained in this lecture.
The EXCLUDE statement can be used to leave out members while doing an IEBCOPY from one PDS to another. The excluded members are ignored and not copied to the output dataset. This lecture covers the EXCLUDE statement in detail, with an example.
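With the same kind of step as above, the SYSIN control statements for selecting specific members might look like this sketch (the member names are placeholders). Swapping the SELECT statement for EXCLUDE MEMBER=(MEMBER1,MEMBER2) would instead copy everything except those members.
//SYSIN    DD *
  COPY OUTDD=OUT1,INDD=IN1
  SELECT MEMBER=(MEMBER1,MEMBER2)
/*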
When the contents of a member are deleted, or the member itself is deleted, the space it occupied is not freed. As a result, empty unusable space keeps building up inside the PDS. Compressing the dataset releases this space for reuse.
IEBCOPY can compress a PDS easily. This lecture explains compression using IEBCOPY in detail along with an example.
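A compress-in-place is simply an IEBCOPY copy where the input and output DD names point at the same PDS (the library name is hypothetical).
//COMPRESS EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//MYPDS    DD DSN=USERID.MY.PDS,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=MYPDS,INDD=MYPDS
/*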
A member being copied with IEBCOPY can be renamed while copying. This is done using the SELECT statement. Apart from renaming, we can also replace the member if it already exists in the output, and both operations can be combined to rename and replace a member at the same time.
The full process of renaming and replacing a member is explained in detail, along with the JCL, in this lecture.
IEHLIST is a system utility that can be used to LIST the entries in a PDS or a VTOC. To see the properties of a PDS we can give the name of the dataset in the SYSIN DD statement along with the LIST keyword.
The whole procedure of listing a PDS, along with the JCL, is explained in this lecture.
Every Volume on a Mainframe has a Volume Table of Contents. This VTOC is like an index which tells the location and properties of every dataset on the volume. To see the content of a VTOC we can give the name of the volume in the SYSIN DD statement along with the LIST keyword.
The whole procedure of Listing a VTOC along with the JCL is explained in this lecture.
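A rough sketch of both LIST operations (the volume serial, device type and dataset name are placeholders, and the volume has to be made available to the step through a DD statement):
//LIST     EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=*
//VOLDD    DD UNIT=3390,VOL=SER=VOL001,DISP=OLD
//SYSIN    DD *
  LISTPDS DSNAME=(USERID.MY.PDS),VOL=3390=VOL001
  LISTVTOC FORMAT,VOL=3390=VOL001
/*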
IEHPROGM can be used to scratch a dataset. Scratch, or delete, removes the dataset from the volume. Scratch, however, does not remove the catalog entry of the dataset; that has to be removed separately using the Uncatalog statement.
This lecture explains in detail the whole procedure to scratch a dataset on the mainframe.
Uncatalog is the act of removing the catalog entry of a dataset. When we uncatalog a dataset, its entry in the catalog is removed but the dataset itself is not deleted; the deletion of the dataset has to be done separately.
We can combine Scratch and Uncatalog in the same JCL.
The uncatalog of the dataset along with the JCL code is explained in detail in this lecture.
IEHPROGM can be used to scratch a PDS member as well. Scratch or delete can be used to delete the dataset on the volume. Instead of deleting the whole PDS only one specified member can be deleted.
This lecture explains in detail the whole procedure to scratch a PDS member along with the code.
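A sketch combining the operations from these lectures (the dataset, member and volume names are placeholders):
//PROGM    EXEC PGM=IEHPROGM
//SYSPRINT DD SYSOUT=*
//VOLDD    DD UNIT=3390,VOL=SER=VOL001,DISP=OLD
//SYSIN    DD *
  SCRATCH DSNAME=USERID.OLD.DATA,VOL=3390=VOL001
  UNCATLG DSNAME=USERID.OLD.DATA
  SCRATCH DSNAME=USERID.MY.PDS,VOL=3390=VOL001,MEMBER=OLDMEM
/*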
IEBEDIT is a dataset utility that can be used to create a new JOB from an existing JOB. We can copy the entire JOB or only the steps that we want. The keywords INCLUDE, EXCLUDE and POSITION control the copy operation.
In this lecture I explain the procedure to copy an entire JOB from one dataset to another using IEBEDIT, along with the JCL code.
Using IEBEDIT we can also copy jobs from multiple datasets into one output dataset. This is done by giving multiple datasets on the SYSUT1 DD name. All the JCL in the SYSUT1 datasets is copied to the output dataset given in SYSUT2.
In this lecture the entire procedure to copy multiple jobs from multiple datasets to the output dataset is explained, along with the JCL code.
While copying steps using IEBEDIT, we can select only the steps we want with the INCLUDE statement. Only the steps named in the INCLUDE statement are copied to the output; all the other steps in the JCL are ignored.
In this lecture the entire procedure to copy steps using INCLUDE from one JCL to another, is explained in detail along with the code.
While copying steps using IEBEDIT, we can also use the EXCLUDE statement. The steps named in the EXCLUDE statement are not copied to the output; all the other steps in the JCL are copied.
The INCLUDE and EXCLUDE statements can also be combined in one JCL.
In this lecture the entire procedure to copy steps using EXCLUDE from one JCL to another, is explained in detail along with the code.
While copying steps using IEBEDIT, we can also select a POSITION in the JCL. The step named on the POSITION parameter and all the steps that follow it are copied to the output; the steps before it are ignored.
The INCLUDE, EXCLUDE and POSITION parameters can also be combined in one JCL.
In this lecture the entire procedure to copy steps using POSITION parameter is explained in detail along with the code.
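A minimal IEBEDIT sketch (the dataset, member and step names are placeholders); TYPE=INCLUDE keeps only the named steps, and TYPE=EXCLUDE or TYPE=POSITION could be used on the same EDIT statement form instead.
//EDIT     EXEC PGM=IEBEDIT
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=USERID.JCL.LIB(BIGJOB),DISP=SHR
//SYSUT2   DD DSN=USERID.JCL.LIB(NEWJOB),DISP=OLD
//SYSIN    DD *
  EDIT TYPE=INCLUDE,STEPNAME=(STEP02,STEP04)
/*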