Tivoli IAM Tech Help
Monday, 29 October 2012
WebSEAL External Authentication Interface (EAI)
By Siva R Praturi
The external authentication interface extends the functionality of the WebSEAL authentication process. It allows a third-party system to supply an authenticated identity to WebSEAL, enabling additional functionality beyond what WebSEAL is designed to do. EAI can be used with applications written in any language, including Java.
EAI process flow

EAI is a mechanism for outsourcing the responsibility for authentication from WebSEAL to a third-party product or application. The way it works is shown in the following diagram.

The diagram describes the following process flow:
1. The user attempts to connect to a protected application on a back-end server. WebSEAL redirects the request to the EAI server, which may be on a separate computer from WebSEAL.
2. WebSEAL allows unauthenticated access to the EAI server. This is necessary because the user is not authenticated at this point.
3. The user and the EAI server communicate. This communication can be as long and as involved as necessary.
4. The user, based on an HTML page from the EAI server, retrieves a trigger URL, which is a URL configured in WebSEAL as one that might contain the EAI output.
5. The EAI server sends back a reply with an HTTP header that contains the user identity and possibly additional information.
6. WebSEAL creates the credential for the user.
7. WebSEAL allows the user to access the back-end server.
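For illustration, the EAI reply in step 5 might look like the following HTTP response sketch. This is an assumption for clarity, not a literal WebSEAL trace; the header name must match what is configured in the [eai] stanza (am-eai-pac is the PAC header name shown later in this post), and the body content is hypothetical:

```
HTTP/1.1 200 OK
Content-Type: text/html
am-eai-pac: <base64-encoded privilege attribute certificate>

<html><body>Login successful</body></html>
```

WebSEAL strips these headers from the response before it reaches the browser and uses them to build the user credential.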
Configuring WebSEAL for EAI
Add the authentication mechanism library

The list of libraries used for authentication is in the [authentication-mechanisms] stanza of the WebSEAL configuration file. To enable EAI, add the following line (all on one line):

• ext-auth-interface = /opt/pdwebrte/lib/libeaiauthn.so
The [eai] stanza

The eai-auth stanza entry in the [eai] stanza of the WebSEAL configuration file enables or disables the external authentication interface. To use EAI for HTTP(S) connections, set the eai-auth value to one of http, https, or both:

• eai-auth = http|https|both

You must also specify the names of the HTTP headers to match those from your application:

• eai-pac-header = am-eai-pac
• eai-pac-svc-header = am-eai-pac-svc
The [eai-trigger-urls] stanza

This stanza specifies the trigger URLs. A trigger URL is a URL whose response can include the EAI server's reply in HTTP headers. Trigger URLs can also be specified using a wildcard pattern:

• trigger = /eailogin/cgi-bin/eai*.pl
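Putting the three stanza changes together, the relevant WebSEAL configuration file entries might look like this sketch (the library path, header names, and trigger pattern are the example values used above; eai-auth = https is an assumed choice):

```
[authentication-mechanisms]
ext-auth-interface = /opt/pdwebrte/lib/libeaiauthn.so

[eai]
eai-auth = https
eai-pac-header = am-eai-pac
eai-pac-svc-header = am-eai-pac-svc

[eai-trigger-urls]
trigger = /eailogin/cgi-bin/eai*.pl
```

Remember that WebSEAL must be restarted for changes to the [authentication-mechanisms] stanza to take effect.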
Server junction and access control list

WebSEAL sees the EAI server as another HTTP server. To allow users to access this HTTP server, WebSEAL requires a junction. Use the following pdadmin command to create the junction. Note that the command is all one line.

server task <instance>-webseald-<webseal computer> create -t tcp -h <eai computer> /eailogin
Users are unauthenticated while they are communicating with the EAI server. To allow unauthenticated access, run the following pdadmin commands. Ignore error message HPDAC0757E about ACL permissions if you get it.

acl create eaiacl
acl modify eaiacl set any-other Trx
acl modify eaiacl set unauthenticated Trx
acl attach /WebSEAL/<webseal computer>-<instance>/eailogin eaiacl
Thursday, 25 October 2012
Automating WebSEAL junction management
By Siva R Praturi
Tivoli Access Manager for e-Business is a Web single sign-on and access management solution. Tivoli Access Manager WebSEAL is the resource manager responsible for protecting Web-based resources. The most common deployment model uses WebSEAL to protect Web applications. WebSEAL junctions are WebSEAL's link to the back-end resources in the environment; this connection is how WebSEAL knows where the applications are. Attributes of a junction include the Web server location (hostname, port, and protocol) along with a number of other options that control how the Web server is accessed and how its content is processed by WebSEAL.
Content on the Web servers is then accessed via the WebSEAL server hostname, with an additional path prefix. For example, a WebSEAL junction pointing to a WebSphere Application Server might be created with the junction name "/was". A user would then access "http[s]://<webseal-server>/was/" to reach the root of the WebSphere Application Server content, rather than "http[s]://<was-server>/".
Management of WebSEAL junctions is performed using the standard IBM Tivoli Access Manager for e-Business administration interfaces, namely:
- Web Portal Manager: a browser-based application
- pdadmin: a command line program
Creating a single junction is a simple task, but factors in a real IBM Tivoli Access Manager for e-Business environment, such as WebSEAL clusters, configuration migration, and disaster recovery, complicate the larger management picture. So it is worth considering the following in any Tivoli Access Manager solution:
- Manage junction definitions across a range of environments, for example, development, system test, and production.
- Provide a mechanism to simplify the administration of WebSEAL junction definitions.
- Use a supported method for junction management.
- Make changes immediate, without requiring a WebSEAL server restart.
- Use familiar commands that are easy for experienced IBM Tivoli Access Manager for e-Business administrators to read.
- Detect configuration errors when the commands are processed.
I am sure you will agree that it is not difficult to automate WebSEAL junction management with scripting, once you know the relevant pdadmin server task command and its options. Below are some rules of thumb you can follow to make deployments across environments easier:
- Define a properties file for every WebSEAL junction with all required values (e.g. userid, password, WebSEAL instance name, host, port).
- Create a deploy-junctions script which reads the properties file and invokes a webseal-junction-create script.
- Create a destroy-junctions script which reads the properties file and invokes a webseal-junction-delete script.
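The steps above can be sketched as a small shell helper. This is a minimal, hypothetical sketch: the properties file format, variable names, and junction options are assumptions for illustration, and a real script would add validation and error handling before piping anything to pdadmin.

```shell
#!/bin/sh
# Hypothetical sketch of a deploy-junctions helper.
# Assumed properties file format (one junction per file), e.g. junction.properties:
#   INSTANCE=default
#   WEBSEAL_HOST=ws1
#   JCT_NAME=/app
#   BACKEND_HOST=be1
#   BACKEND_PORT=9080
#   JCT_TYPE=tcp

deploy_junction() {
  # Source the properties file (assumes trusted, well-formed input).
  . "$1"
  # Emit the pdadmin "server task ... create" command for this junction.
  echo "server task ${INSTANCE}-webseald-${WEBSEAL_HOST} create -t ${JCT_TYPE} -h ${BACKEND_HOST} -p ${BACKEND_PORT} ${JCT_NAME}"
}

# In a real deployment the generated command would be fed to pdadmin, e.g.:
#   deploy_junction junction.properties | pdadmin -a sec_master -p "$PW"
```

Emitting the command text rather than calling pdadmin directly makes the script easy to dry-run and review before it touches a live environment.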
You can also consider automating 'Objectspaces', 'ACLs', and so on along similar lines after creating junctions. I have done this exercise, and it saves an ample amount of time during deployment.
Monday, 3 September 2012
DB2 Backup
By Vamshidhar K
Data loss and corruption are unfortunate realities which need to be handled proactively. DB2 provides a set of commands for backup and recovery. The following section explains some basic details about DB2 backup. The two possible types of backup are:

· Offline backup: An offline backup can only be taken while the database is inactive and not being used at the time of backup. The database is consistent upon restore, without applying any transaction logs.

· Online backup: An online backup can be taken while the database is active and in use. The database is inconsistent upon restore, and log files are required to bring the database to a consistent state.
To back up a database, use the DB2 BACKUP DB database_name command. It looks relatively simple, but when the size of the database grows to a few hundred GB of data, you need to know the various options available in DB2 BACKUP. In the recent past I came across such a situation; some options that I tried and found useful are listed below.
a) Back up a database, redirecting the output to two different directories:

DB2 BACKUP DB database_name TO output_directory1, output_directory2

b) Back up a database with compression:

DB2 BACKUP DB database_name COMPRESS

c) Back up a database with compression, redirecting the output to two different directories:

DB2 BACKUP DB database_name TO output_directory1, output_directory2 COMPRESS
Note:
- Buffers and parallelism are used to enhance performance. If you do not specify them, DB2 will select optimal values. Append the following to the above command if you would like to specify buffers and parallelism: WITH 2 BUFFERS BUFFER 4096 PARALLELISM 4.
- You can also use the WITHOUT PROMPTING option if you do not want any user intervention during the backup.
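Putting these options together, a compressed backup to two directories with explicit buffers, parallelism, and no prompting might look like the following sketch (database name and output paths are placeholders; tune the buffer and parallelism values for your system):

```
DB2 BACKUP DB database_name TO output_directory1, output_directory2 COMPRESS WITH 2 BUFFERS BUFFER 4096 PARALLELISM 4 WITHOUT PROMPTING
```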
The size of the DB2 backup file and the time taken for the backup are two important parameters one would like to know during DB2 BACKUP activity. The following table gives some statistics collected while performing a backup of a database of size 77.6 GB. Please note that these statistics may vary based on the performance of the system, the data, and other environmental factors that affect the overall backup activity.
| Sl. No. | Database Size in GB | Time in minutes | Backup is compressed? | Backup File Size in GB | Parallelism | Backup to multiple destinations? | Destination different from source? |
|---------|---------------------|-----------------|-----------------------|------------------------|-------------|----------------------------------|------------------------------------|
| 1       | 77.6                | 90              | Yes                   | 27.2                   | 1           | No                               | No                                 |
| 2       | 77.6                | 52              | Yes                   | 27.2                   | 2           | No                               | No                                 |
| 3       | 77.6                | 49              | Yes                   | 27.2                   | 4           | No                               | No                                 |
| 4       | 77.6                | 50              | Yes                   | 14.1, 13.1             | 2           | Yes                              | Partial*                           |
| 5       | 77.6                | 19              | No                    | 77.6                   | 2           | No                               | Yes                                |
| 6       | 77.6                | 23              | No                    | 40.2, 37.8             | 2           | Yes                              | Yes                                |

* Partial - one directory is on the source drive.
Below are my observations from this activity:
- DB2 BACKUP without the COMPRESS option is quick, but creates a backup file almost equivalent in size to the database.
- DB2 BACKUP with the COMPRESS option takes longer, but creates a relatively small backup file. Overall backup duration can be reduced by adjusting buffers and parallelism.
To check the status of the database backup activity, issue the below command from another db2cmd window:

DB2 LIST UTILITIES SHOW DETAIL

This will list the details about the current backup activity. See the example below:
ID                            = 2882
Type                          = BACKUP
Database Name                 = database_name
Partition Number              = 0
Description                   = offline db
Start Time                    = 22-08-2012 11:00:05.522606
State                         = Executing
Invocation Type               = User
Throttling:
   Priority                   = Unthrottled
Progress Monitoring:
   Estimated Percentage Complete = 27
   Total Work                 = 83709106173 bytes
   Completed Work             = 22903402547 bytes
   Start Time                 = 22-08-2012 11:00:05.522645
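The "Estimated Percentage Complete" field is simply Completed Work divided by Total Work. As a quick sanity check, you can compute it yourself from the two byte counters, for example with awk (the counter values below are taken from the example output above):

```shell
# Compute percent complete from the LIST UTILITIES byte counters.
awk -v done=22903402547 -v total=83709106173 \
    'BEGIN { printf "%.0f\n", 100 * done / total }'
# prints 27
```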
Wednesday, 29 August 2012
Writing Java Extensions in ITIM
By Siva Praturi
The Identity Manager provisioning platform is designed with extensibility as a primary goal. Below are a few typical scenarios in which we generally extend ITIM capability to meet business requirements:
- Generating unique IDs while provisioning accounts
- Adding custom debug messages to the ITIM log
- A custom approval process that is determined by looking up an approver in a database
You can extend ITIM workflows in two ways: create a workflow extension in Java that can be called as a regular operation, or extend the JavaScript engine with Java.

To extend the JavaScript engine, one method to write an extension in Java that is called from Tivoli Identity Manager (TIM) is to add a new custom class into the application and then call it from JavaScript. There is an alternative method, which uses classes that implement the com.ibm.itim.script.ScriptExtension interface. This method allows extensions to be limited to specific TIM components and to access context information such as variables, but it is more complicated.
Developing Java Extensions in ITIM

The figure below shows a typical deployment of custom Java extensions in ITIM.

The steps involved in writing Java extensions in ITIM are:
- Develop and build the custom Java code as per the business requirement.
- Deploy the new extension:
  - Update the application server's classpath.
  - Register the JavaScript extensions.
- Test the Java extensions by calling them from ITIM workflows.
Updating the Application Server's Classpath

TIM is a J2EE application running on top of IBM WebSphere Application Server (WAS). To add a Java class to the application, follow these steps:

1. Compile the Java code into a class file. To use the TIM API, put $ITIM_HOME/lib/itim_server.jar on the class path.
2. Package the Java class and properties file into a Java archive (JAR) file.
3. Log on to the WAS Integrated Solutions Console. Perform step 4 or step 5 to add the JAR file to the classpath.
4. Create a new shared library that includes the JAR file you created. Modify the ITIM application: change the shared library references to add the new shared library.
5. Alternatively, expand the 'Environment' menu and click 'Shared Libraries'. Click the 'ITIM_LIB' shared library. Under 'General Properties', append the JAR file to the Classpath property.
6. Save the modified WAS configuration.
Registering the JavaScript Extensions

These are the steps to call the new class from JavaScript:

1. Edit the file $ITIM_HOME/data/scriptframework.properties. In the file, add a property that starts with ITIM.java.access whose value is the name of the new class. For example: ITIM.java.access.test=com.ibm.tivoli.javaext.TestClass
2. Restart WebSphere Application Server.
3. Use the class in JavaScript within ITIM to test it. For example, define a variable test in an identity policy and request an account:

var test = new com.ibm.tivoli.javaext.TestClass();
Note: To learn more about ITIM extensions, look at the demonstration code in the $ITIM_HOME/extensions/5.1/examples directory.
Thursday, 16 August 2012
Tivoli Access Manager Tracing
By Siva Praturi
IBM Tivoli Access Manager provides configurable tracing capabilities that can aid in problem determination. Tracing can be activated either through a routing file or through the pdadmin server task trace command. Trace files are required to assist support personnel in diagnosing problems with the functioning of the Tivoli Access Manager software.
Using routing files

A routing file can be used to enable and disable trace. The routing file defines the name, location, and logging behaviour of certain message log and trace log files. The Tivoli Access Manager Base and WebSEAL components each have their own routing (or routing.template) files defined within their respective etc directories. The contents of a routing file are fairly self-descriptive. When using a routing file to affect trace logging or message logging, you must stop and restart the Tivoli Access Manager component for the routing file change to take effect.
Using the pdadmin trace utility

The pdadmin server task trace command can be used to dynamically control trace operations for the Tivoli Access Manager authorization server, WebSEAL, and the Tivoli Access Manager Plug-in for Web Servers. The trace utility allows you to capture information about error conditions and program control flow in Tivoli Access Manager components. This information is stored in a file and used for debugging purposes.

Tracing for the Tivoli Access Manager policy server cannot be controlled dynamically with the pdadmin server task trace command. You must use the routing file to enable tracing for the policy server, and the policy server must be restarted for any routing file modifications to take effect.

Let's take a look at the tracing system in Tivoli Access Manager and some of the less complex WebSEAL and WebPI trace points in detail.
Trace elements

There are two elements within the trace system used to control the activation of trace statements: the trace component and the trace level.

Trace component: Trace within Tivoli Access Manager is organized into trace components. It is important to select the appropriate trace component to troubleshoot the problem area. The trace components themselves are organized in a hierarchical fashion: if trace is activated for a parent trace component, it is automatically activated for all child trace components.

Trace level: The amount of detail that is produced for a particular trace component is governed by the trace level that is selected. The trace level is a single integer in the range 1-9, with 9 reporting the most detail and 1 reporting the least.
Trace output generally consists of a time stamp for the trace entry, the ID of the thread, the name of the trace component, the name of the product source file, and the trace text.

The figure below illustrates the process flow for the pdadmin server task trace command.
Listing trace components

To list all of the trace components offered by a server, issue the trace list command:

server task <server-name> trace list
Adjusting the trace level of a component

To change the level and destination for a specific trace point, use the following command:

server task <server-name> trace set <component> <level> [file path=file|other-log-agent-config]

where component is the name of a trace component as shown by the list command, and level controls the amount of detail to be gathered, in the range 1 to 9. The optional file path parameter specifies the location for trace output. If this parameter is not supplied, the trace output is sent to the stdout stream of the server.
Retrieving the current trace level of a component

To show the names and levels for all enabled trace components, use the following command:

server task <server-name> trace show [component]

If the component parameter is omitted, the output lists the name and level of all of the enabled trace components.
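As a sketch of a typical debugging session with the pdweb.debug component described below, you might enable the trace, reproduce the problem, check what is active, and then switch the trace off again. The server name and file path here are placeholders:

```
pdadmin> server task default-webseald-ws1 trace set pdweb.debug 2 file path=/tmp/pdweb.debug.log
pdadmin> server task default-webseald-ws1 trace show
pdadmin> server task default-webseald-ws1 trace set pdweb.debug 0
```

Setting the level back to 0 disables the trace point, which matters because leaving trace enabled can degrade performance, as the note at the end of this post warns.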
Generally used trace components with WebSEAL & WebPI

| Component          | Description |
|--------------------|-------------|
| pd.ivc.ira         | Used to trace the Tivoli Access Manager interaction with the LDAP server. As such, it is a trace component that can be used with WebSEAL or PDWebPI. The trace is useful in determining problems that occur during authentication. |
| pdweb.debug        | Used to trace the HTTP headers sent between the client and WebSEAL. This includes the headers contained within the request, as well as the response. |
| pdweb.snoop.client | Used to trace the HTTP packets which are transmitted between WebSEAL and the client. |
| pdweb.snoop.jct    | Used to trace the HTTP packets that are transmitted between WebSEAL and the junctioned back-end Web server. |
| pdweb.wan.azn      | Used to trace the authorization decision for all transactions. This includes details related to the credential upon which the authorization decision is made, the resource that is being accessed, as well as the result of the authorization decision. |
| pdweb.wns.authn    | Used to trace details concerning the authentication process applied by WebSEAL. This includes information such as the authentication mechanism, the details used during the authentication process, and the result of the authentication. |
| pdwebpi.azn        | Used to trace the authorization decision for all transactions. |
| pdwebpi.proxy-cmd  | Can be used to examine the commands sent by the proxy component; from these, an administrator can derive what the proxy component is instructing the Web server to do with each request. |
| pdwebpi.request    | Used to trace the HTTP requests that are received by the system. |
| pdwebpi.session    | Used to trace details pertaining to a user's session. In particular, it traces the contents of the user's session along with session expiration details and any changes that might be made to the user's session. |
Note: Use trace with caution. It is intended as a tool to use under the direction of technical support personnel. Messages from trace are sometimes cryptic, are not translated, and can severely degrade system performance.
Tuesday, 17 July 2012
Backup and Restore Tivoli Directory Server
Overview of backup and restore procedures for Tivoli Directory Server
By Siva Praturi
Tivoli Directory Server provides multiple methods for backing up and restoring directory server instance information. There are methods that back up the complete information for a directory server instance, and methods that back up only the data in the database. You can back up and restore Tivoli Directory Server using the options below; each has its advantages and disadvantages.
- DB2 backup (db2 backup) and restore (db2 restore) commands
- Tivoli Directory Server backup (idsdbback) and restore (idsdbrestore) commands
- Tivoli Directory Server tools db2ldif and ldif2db
Choosing an appropriate backup method is a very important decision in Tivoli Directory Server environments. In my view, it is always the safe option to choose both an automated LDIF export and database backups, with the LDIF export stored on a disk drive and the database backup on a tape device.
DB2 backup and restore commands

The db2 backup and db2 restore commands are provided by IBM DB2. The advantage of using these commands is performance and flexibility in specifying the location of the database files. The db2 restore command can be used to distribute the database across multiple disks or simply to move the database to another directory.

The disadvantage of the db2 backup and db2 restore commands is their complexity. Another disadvantage is the potential for incompatibility when backing up and restoring across platforms and across DB2 versions.
An important consideration when using the db2 backup and db2 restore commands is the preservation of DB2 configuration parameters and system statistics optimizations in the backed-up database. The restored database has the same performance optimizations as the backed-up database. This is not the case with the LDAP db2ldif, ldif2db, or bulkload tools.
idsdbback and idsdbrestore commands

The idsdbback and idsdbrestore commands are provided by Tivoli Directory Server to back up both the DB2 database and the directory server configuration. The advantage of using these commands is that they back up the directory server configuration; the other ways of backing up the directory server do not include this information, which comprises the directory schema, the configuration file, and the key stash file. Although it is possible to back these files up manually, the idsdbback and idsdbrestore commands make this task easier. However, it is important to note that the idsdbback command can be used only when Tivoli Directory Server is not running.

The disadvantage of using the idsdbback and idsdbrestore commands is less flexibility in how the underlying DB2 restore is performed. For example, using the idsdbrestore command, the DB2 restore cannot be directed to distribute the database across multiple disks. Another disadvantage is the potential incompatibility when backing up the database on one platform (for example, AIX) and restoring it on another (for example, Windows).
db2ldif, ldif2db, and bulkload commands

The db2ldif and ldif2db commands are provided by Tivoli Directory Server to dump or restore the database to or from a file in LDAP Data Interchange Format (LDIF). The advantage of using these commands is portability, standardization, and size. It is important to note that these tools do not preserve the DB2 configuration parameters and database optimization parameters.

The output LDIF file can be used to restore a directory server on a different platform, and possibly, with some modification, on a different directory server product, for example the Sun ONE Directory Server. The disk space requirement for the LDIF output from db2ldif is approximately 6.5 times less than for the backed-up database from db2 backup or idsdbback.
The disadvantage of using db2ldif and ldif2db is speed compared to db2 backup and db2 restore. Because the db2 backup and restore commands are essentially a disk copy, they are the fastest alternative.

For restoring the database, the bulkload utility is many times faster than the ldif2db command, yet it is still much slower than db2 restore; the bulkload utility takes approximately 12 times longer than a db2 restore. In order to restore a directory server using bulkload, the directory server must be empty. This is accomplished by restoring an empty database or by unconfiguring, then reconfiguring, the directory server.
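For illustration, the three approaches above might be invoked roughly as follows. The instance name, database name, and paths are placeholders, and exact flags vary by Tivoli Directory Server version, so check the command references for your release before use:

```
# DB2-level backup of the instance database
db2 BACKUP DB ldapdb2 TO /backup/db2

# Full instance backup, including configuration (server must be stopped)
idsdbback -I idsinst -k /backup/tds

# LDIF export (server may be running)
db2ldif -I idsinst -o /backup/export.ldif
```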
Choosing an appropriate backup method

What are the factors to consider in choosing a backup method? The first choice is between exporting data to LDIF and using a database backup. There are some advantages to exporting to LDIF:
- For directories with no more than a few hundred entries, it might be more time- and space-efficient to export the entries to LDIF as a backup, because there is extra overhead for a full database backup.
- The directory server can be running and accepting updates during the export to LDIF.
- LDIF is a portable format, so if you need to move data from an LDAP directory on Windows to one on AIX, using LDIF is the best approach.
- LDIF is also vendor neutral, so if you decide to move your data from some other vendor's directory server to the Tivoli Directory Server, then exporting to LDIF is a good way to go.
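As a reminder of why the format is so portable, a minimal LDIF entry is just attribute-value pairs in plain text, which any standards-compliant LDAP directory can import. The suffix and attribute values below are hypothetical:

```
dn: cn=Jane Doe,ou=people,dc=example,dc=com
objectclass: inetOrgPerson
cn: Jane Doe
sn: Doe
mail: jane.doe@example.com
```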
Now, if you are not moving data between directories and you have more than a few hundred entries, you are better off doing a database backup. For large directories, it is much faster to do a database backup than to export all the data to LDIF. A database backup has these advantages:
- It is faster and more efficient for directories with thousands to millions of entries.
- It saves the database configuration settings, including those for the underlying tablespaces.
- DB2 provides options to do either offline or online backups, including an option to do full backups or delta backups.
- Your saved backup images and the transaction logs allow for recovery right up to the last transaction completed prior to a disk crash.
So, database backups are usually the best approach, but you still need to decide whether online or offline backups will work best for you. Doing offline backups is the simplest approach: it requires less administrative activity, and if you can afford to stop a directory server long enough to do a periodic backup, then it is probably the best solution for you. On the other hand, if you cannot afford to stop a directory server to take a backup, then online backups are the way to go. The extra administrative overhead when a database is configured for online backups is related to the accumulation of transaction logs, which necessitates some management of them.