Sunday, March 8, 2015

Read a file from Content Server using DFS

/**
 * Downloads a document's content file from the Content Server via DFS
 * and opens it with the default desktop application.
 */
package dfs;

import java.awt.Desktop;
import java.io.File;

import com.emc.documentum.fs.datamodel.core.DataObject;
import com.emc.documentum.fs.datamodel.core.DataPackage;
import com.emc.documentum.fs.datamodel.core.ObjectId;
import com.emc.documentum.fs.datamodel.core.ObjectIdentity;
import com.emc.documentum.fs.datamodel.core.ObjectIdentitySet;
import com.emc.documentum.fs.datamodel.core.OperationOptions;
import com.emc.documentum.fs.datamodel.core.content.Content;
import com.emc.documentum.fs.datamodel.core.content.ContentTransferMode;
import com.emc.documentum.fs.datamodel.core.context.RepositoryIdentity;
import com.emc.documentum.fs.datamodel.core.profiles.ContentProfile;
import com.emc.documentum.fs.datamodel.core.profiles.ContentTransferProfile;
import com.emc.documentum.fs.datamodel.core.profiles.FormatFilter;
import com.emc.documentum.fs.rt.context.ContextFactory;
import com.emc.documentum.fs.rt.context.IServiceContext;
import com.emc.documentum.fs.rt.context.ServiceFactory;
import com.emc.documentum.fs.services.core.client.IObjectService;

/**
 * @author mpoli_fujitsu
 */
public class DownloadFileFromEDMS {

    private static final String repository = "XXXX";
    private static final String userName = "XXXX";   // superadmin
    private static final String password = "XXXX";   // superadmin
    private static final String address = "http://abcdfg:9080/services";
    private static final String moduleName = "core";

    public static void main(String[] args) {
        try {
            simpleContentTransfer();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void simpleContentTransfer() throws Exception {
        // Build a service context that carries the repository credentials.
        ContextFactory contextFactory = ContextFactory.getInstance();
        IServiceContext serviceContext = contextFactory.newContext();
        RepositoryIdentity repoIdentity = new RepositoryIdentity();
        repoIdentity.setRepositoryName(repository);
        repoIdentity.setUserName(userName);
        repoIdentity.setPassword(password);
        serviceContext.addIdentity(repoIdentity);

        // Obtain a remote Object Service instance.
        ServiceFactory serviceFactory = ServiceFactory.getInstance();
        IObjectService objectService = serviceFactory.getRemoteService(
                IObjectService.class, serviceContext, moduleName, address);

        // Identify the document to download by its r_object_id.
        ObjectId objectId = new ObjectId("09021e3c80036e0c");
        ObjectIdentitySet objectIdSet = new ObjectIdentitySet();
        objectIdSet.addIdentity(new ObjectIdentity(objectId, repository));

        // Request the content via MTOM transfer, accepting any format.
        ContentTransferProfile transferProfile = new ContentTransferProfile();
        transferProfile.setTransferMode(ContentTransferMode.MTOM);
        ContentProfile contentProfile = new ContentProfile();
        contentProfile.setFormatFilter(FormatFilter.ANY);
        OperationOptions operationOptions = new OperationOptions();
        operationOptions.setContentTransferProfile(transferProfile);
        operationOptions.setContentProfile(contentProfile);

        // Fetch the object and its primary content.
        DataPackage dataPackage = objectService.get(objectIdSet, operationOptions);
        DataObject dataObject = dataPackage.getDataObjects().get(0);
        Content resultContent = dataObject.getContents().get(0);
        if (resultContent.canGetAsFile()) {
            // getAsFile() already returns the downloaded file; there is no
            // need to create a new (empty) file on top of it.
            File file = resultContent.getAsFile();
            Desktop.getDesktop().open(file);
            System.out.println(resultContent.getFormat());
        }
    }
}

Tuesday, February 24, 2015

How does a Document Get its Default ACL?

When you create or import a new document, if you don’t explicitly assign an ACL to it, Documentum will assign an ACL by default. But how does the server know which ACL to assign? Actually, you can configure the server to choose the default ACL based on one of three methods.
·         By User: Every document created by a lawyer gets the legal ACL, regardless of what type of document it is or where it’s located. Every user has a default ACL defined in the dm_user object. You can see this by going to the Users dialog in Workspace. If you configure the docbase to assign default ACLs by user, the ACL that is assigned to a new document will be determined by the ACL that is defined in your dm_user object. The default Documentum installation will assign default ACLs based on the user’s default ACL.
·         By Object Type: Every contract gets the legal ACL regardless of who created it or where it is located. If you configure Documentum to assign default ACLs based on object type, then whenever an object is created, it will get the ACL that is assigned to the object type that you are creating. If no ACL has been assigned to the object type, the server will traverse up the object hierarchy until it finds an object type that does have an associated ACL.
·         By Folder: Every document created in a folder gets that folder’s ACL regardless of the type of document or who created it. This is probably the most meaningful way to assign default ACLs. Whatever ACL is assigned to the folder that the document is created in, this is the ACL that will be assigned to the document.
To configure the server to use one of these methods, set the default_acl attribute of the dm_server_config object. Use 1 to assign default ACLs based on Folder, 2 to assign based on Object Type, and 3 to assign based on User.
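As a small illustration (a sketch only; this enum and its names are ours, not part of any Documentum API), the mapping between the default_acl values and the three assignment methods can be captured like this:

```java
/**
 * Maps the dm_server_config.default_acl attribute values to the
 * three default-ACL assignment methods described above.
 * Illustrative helper only; not part of the Documentum API.
 */
enum DefaultAclMode {
    FOLDER(1),       // 1: new document inherits the ACL of its folder
    OBJECT_TYPE(2),  // 2: new document gets the ACL bound to its object type
    USER(3);         // 3: new document gets the creator's default ACL (dm_user)

    private final int value;

    DefaultAclMode(int value) {
        this.value = value;
    }

    int getValue() {
        return value;
    }

    /** Resolves a default_acl attribute value to its assignment mode. */
    static DefaultAclMode fromValue(int value) {
        for (DefaultAclMode mode : values()) {
            if (mode.value == value) {
                return mode;
            }
        }
        throw new IllegalArgumentException("Unknown default_acl value: " + value);
    }
}
```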

Wednesday, February 18, 2015

Capital Projects: unable to open/edit report templates

If you want to edit or change reports, let me give you some basic information which will help you customize them as you like.
Our report templates are based on JasperReports. For more information, refer to the JasperReports documentation and tutorials.
Our templates are populated via DQL queries in the template, which are processed through a custom datasource (refer to the tutorials to understand this better) that we provide to the template.
You can use the iReport Designer to edit the report templates. The iReport Designer version you have to use depends on the version of the JasperReports jar that is used in your setup.
To find out the jar version, look for "jasperreports" within Documentum\<jboss version>\server\DctmServer_MethodServer\deploy\ServerApps.ear\DmMethods.war\WEB-INF\lib.
The version number stated there determines which iReport Designer is compatible with EPFM; e.g. jasperreports-3.7.4.jar requires iReport Designer 3.7.4.
To edit the template, you can export it to any temp location. It will be exported in the form of a .ecsfr file. You can simply open this file with any archiving/compression tool (like WinZip or WinRAR) to view its contents.
Look for the .jrxml file within the .ecsfr file, e.g. "Late-PendingTransmittals_001.jrxml". The .jrxml file is the report template which is populated with data; it is basically the design of the report.
It is this .jrxml file that you will need to edit using the iReport Designer.
The iReport Designer is basically used for designing the layout of the report. Refer to existing report templates to see how they use DQL to populate themselves.

Once you've created a proper report template, package it in a .ecsfr file (basically zip it using WinZip/WinRAR and change the extension to .ecsfr). Once this is done, import the report template.
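Since a .ecsfr file is just a ZIP archive under a different extension, the .jrxml can also be pulled out programmatically instead of with WinZip/WinRAR. Here is a minimal sketch using the standard java.util.zip API (the class and method names are ours, purely illustrative):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

/** Extracts the first .jrxml report template found inside a .ecsfr archive. */
class EcsfrExtractor {

    static File extractJrxml(File ecsfrFile, File targetDir) throws IOException {
        // A .ecsfr file is an ordinary ZIP archive with a different extension.
        try (ZipFile zip = new ZipFile(ecsfrFile)) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                if (!entry.isDirectory() && entry.getName().endsWith(".jrxml")) {
                    // Strip any folder prefix from the entry name.
                    File out = new File(targetDir, new File(entry.getName()).getName());
                    try (InputStream in = zip.getInputStream(entry);
                         FileOutputStream fos = new FileOutputStream(out)) {
                        byte[] buffer = new byte[8192];
                        int read;
                        while ((read = in.read(buffer)) != -1) {
                            fos.write(buffer, 0, read);
                        }
                    }
                    return out; // first template found wins
                }
            }
        }
        return null; // archive contains no .jrxml
    }
}
```

After editing the extracted .jrxml in iReport Designer, re-zip it and rename the archive back to .ecsfr as described above.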

Tuesday, February 3, 2015

XCP 2.0 Installation

Deploy xMS Agent on Apache Tomcat
This is part one of an additional guide to deploying the xms-server.war in a DIY Windows environment. The next part describes the manual registration of an xCP environment.
If you are installing xCP 2.0 on Windows, you can download the compiled exe package. The *.exe download package does not include the catalina.bat file, because the *.exe package installs Tomcat as a Windows service; the Tomcat memory settings are then set in the Apache Tomcat 7.0 Properties application. The zipped package does include the catalina.bat file. The full requirements are listed on the D7 xMS preparation page.
1.      Stop Apache Tomcat 7.0 Services
2.      Copy the xms-server.war file into the Tomcat 7.0 webapps folder
3.      Rename the *.war extension to *.zip, extract the archive, and delete the original *.zip file afterwards
4.      Open cmd – the Windows command prompt:
navigate to \webapps\xms-server\WEB-INF
xcopy xms-agent-context.xml xms-server-context.xml
Ensure that the xms-agent-context and the xms-server-context files are equal:
<bean id="xmsServerConfigurator" class="com.documentum.xms.server.XmsServerConfiguration">
    <!-- Whenever an xMS agent is being deployed, set applicationType to "AGENT_TYPE" -->
    <property name="applicationType" ref="AGENT_TYPE"/>
</bean>
5.      Open the Apache Tomcat 7.0 Properties Application and set:
-XX:MaxPermSize=2048m
Apache Tomcat 7.0 Properties
6.      Start Apache Tomcat services
wait more than a minute until the xms-server is completely deployed
7.      Check the deployment: http://<localhost>:<8080>/xms-server
port 8080 for non-SSL / port 8443 for SSL connections
Test web page to check the xMS Agent and xMS registered environment
The heart of the xCelerated Management System version 1.0 is the xms-tools.
All files are located in the bin folder of the xms-tools-1.0 package.
The tools include the Command Line Interface (CLI), the xMS Agent, the xMS Server, and some batch files.
The xms.bat is called the CLI in the documentation and is located in the main bin folder. Ensure you have set up the xms-server.properties file correctly (located in the config folder). The CLI is not only a small administrative command tool: you can display all environments (show-environments) or one in detail (show-environment-details –name test_xcp), import and export the configuration, show/create/disable users, and execute the deployment into the registered environment.
xMS Tools version 1.0
2013 © Mr.Crazyapple
The xMS Server is only available if you have installed the vFabric tc Server in your VMware environment.
The xMS Agent is accessed via the xms-env-registration-ui.bat file. In this user interface you can add all your available Documentum components (e.g. Repository, CTS, CIS, BAM, etc.). This agent makes the xCP applications in your environment available.
This screenshot shows all the commands that are executed in the CLI.
The manual registration is only necessary if you don't use the xMS Server and the blueprint installation; it makes your xCP applications available in a do-it-yourself environment.
If the utility is accessing the xMS Agent for the first time, it prompts you to set a password for the
default admin user. Use this password for subsequent access to the utility.
You do not have to create additional system accounts, because there can be only one SERVICE account per environment.
You can use the SERVICE_ENDPOINT system account type, if you want to create a generic account for use across several vDCs. SERVICE_ENDPOINT is specific to each environment, and you need to specify it in the environment profile XML file for each environment.
This screenshot was taken from the xMS Agent environment registration utility.
Register your environment manually. Create a new environment blueprint file and register all the recommended Documentum components:
Repository: Content Server Ver.7.0
BAM: BAM-Server 2.0
App Host: Tomcat Server 7.0.29
Documentum Administrator: DA 7.0
Search: xPlore 1.3
BPS: Process Integrator 2.0
Content Intelligence Server: CIS 7.0
Content Transformation Server: CTS 7.0
(optional) Thumbnail Server 7.0
This screenshot displays the registration of the primary repository. You can add as many repositories as you like.
Save your settings and restart the xMS Agent. Remember: the agent is part of the app server (Tomcat) webapps, so restart the Tomcat services on your Windows host server.
Wait until all webapps have been successfully (re)deployed.


Thursday, January 8, 2015

Documentum Disaster Recovery

When developing a disaster recovery architecture and plan for any enterprise application, two factors come into play: recovery time and recovery point.  Recovery time (RT) is the amount of time it takes to restore the system to working order.  Recovery point (RP) is the point in time for which the data can be restored.  These requirements must be gathered prior to developing a disaster recovery plan (and ideally, before developing the system infrastructure).  For a single-service application (e.g. a relational database), once the RT and RP values have been determined, implementing them is a straightforward affair.  However, for an application which is comprised of several interdependent services (e.g. Documentum Content Server), each service must have a separate DR strategy, and the combination of those strategies must meet the application's RT and RP values.  Let us take a high-level look at the services that comprise a typical Documentum application and what their inter-service dependencies are.
At the heart of any Documentum application is a relational database.  The DR strategy may be to use some vendor provided clustered database, to have a second database that syncs with the first, or to simply recover from a cold backup.  The Recovery Point of the database must be older (less current) than that of the content storage area to minimize the likelihood of content objects missing their associated content files.  This is a soft rule.  If, for the application, the metadata is high value and the content data is low value, this rule can be broken.
Next let us look at the content storage area.  Again, the most common DR strategies are clustering and replication.  As we mentioned earlier, the content storage area of a recovered system should be more current than the RDBMS.  This way, all content-bearing objects will have their content available.  Note that when files are updated, they are versioned and the old files are not overwritten.  When files are deleted, their associated content is not removed until a file clean job is run.
In order for an Index Server to perform its duty, it must have a valid, up-to-date index.  In DR situations, this is not easy to guarantee.  One method for dealing with DR for an Index Server is to simply restore the Index Server but not the associated index.  This will require the Index Server to re-index the content (and full-text functionality will be limited during this time), but will alleviate concerns over the validity of the index.  Other strategies include restoring both the Index Server and the index.  Realize that if the index is more current than the Content Server's restore point, it may contain invalid entries (documents that do not exist in the content server).  If the index is less current than the restored Content Server, documents may have been marked as indexed without actually being indexed.  This can be mitigated by re-indexing documents that may have been added or modified during the delta.
Thankfully, Content Server and Application Servers are relatively stateless, and therefore can be restored to any point that has all of the custom code and configurations that coincide with the RP of the CS.  Simply starting the apps should suffice to restore the system to working order.
This post highlights the complexity of Documentum DR; the devil always lies in the details.  Do not try to implement DR without verifying the infrastructure architecture, installation and configuration procedures, and redundant hardware in other environments.
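The aggregation logic from the first paragraph can be sketched as a toy model (not Documentum code; all names here are illustrative): the application's effective RT is the longest of its services' recovery times, its effective RP is the oldest of its services' recovery points, and the soft rule requires the database RP to be no more current than the content store's.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Collection;

/** Toy model of combining per-service DR characteristics; illustrative only. */
class DrPlan {

    /** One service's recovery time and the moment its data can be restored to. */
    static final class ServiceRecovery {
        final String name;
        final Duration recoveryTime;
        final Instant recoveryPoint;

        ServiceRecovery(String name, Duration recoveryTime, Instant recoveryPoint) {
            this.name = name;
            this.recoveryTime = recoveryTime;
            this.recoveryPoint = recoveryPoint;
        }
    }

    /** The application is up only when every service is up, so its RT is the maximum. */
    static Duration applicationRecoveryTime(Collection<ServiceRecovery> services) {
        Duration max = Duration.ZERO;
        for (ServiceRecovery s : services) {
            if (s.recoveryTime.compareTo(max) > 0) {
                max = s.recoveryTime;
            }
        }
        return max;
    }

    /** Consistent data is only as recent as the least current service, so the RP is the oldest. */
    static Instant applicationRecoveryPoint(Collection<ServiceRecovery> services) {
        Instant oldest = null;
        for (ServiceRecovery s : services) {
            if (oldest == null || s.recoveryPoint.isBefore(oldest)) {
                oldest = s.recoveryPoint;
            }
        }
        return oldest;
    }

    /** Soft rule from this post: the database RP must not be more current than the content store RP. */
    static boolean metadataOlderThanContent(Instant databaseRp, Instant contentStoreRp) {
        return !databaseRp.isAfter(contentStoreRp);
    }
}
```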

Thursday, November 27, 2014

Quickflow

Quickflow is a predefined workflow that doesn't execute any actions on the document. It sends the document from one person to another. The Quickflow has been part of the Server product since the first version of 4i. In previous versions of Server it was not called Quickflow. The template name for the Quickflow is sendToDistributionList, which resides in the System cabinet.

Here are some steps to identify the workflow that the document belongs to:
- Get the task name -- you can get the task name by looking at the inbox or by querying the dmi_queue_item table

- Once you have the task name, run the following query:

select object_name from dm_process where r_object_id in (select process_id from dm_workflow where r_object_id in (select r_workflow_id from dmi_package where r_package_name = '<name of the package>'));

The result will be the name of the workflow associated with the document. Nevertheless, a Quickflow will always use the sendToDistributionList workflow.

Wednesday, November 26, 2014

Capital Projects: File -> New greyed out in the Project Configuration area, or unable to create picklists and other config objects



There can be reasons like:

1. Permission issues
2. Membership issues
3. Config types missing in the Content Rules of the project config folder.

But in my case I have seen a funny and interesting issue with the objectlist component, which is a hidden configuration.

You cannot see any error messages in the logs, because it is not an error.

In the objectlist component we had configured columns based on folder paths for the projects, like below:

<scope
    location="/PROJECTS/PTDR-ABC/04. Master Documents/01. PROJECT COMMUNICATION,
              /PROJECTS/PTDR-ABC/04. Master Documents/01. Forms,">
    <component id="objectlist"
        extends="objectlist:eif/config/library/objectlist/objectlist_component.xml">


Unfortunately, there was a trailing comma "," before the closing quote of the scope location (01. Forms,"), so we removed it and the system behaved as expected.
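For reference, the corrected scope, with the trailing comma removed after the last path (paths and component element exactly as in the excerpt above), looks like this:

```xml
<scope
    location="/PROJECTS/PTDR-ABC/04. Master Documents/01. PROJECT COMMUNICATION,
              /PROJECTS/PTDR-ABC/04. Master Documents/01. Forms">
    <component id="objectlist"
        extends="objectlist:eif/config/library/objectlist/objectlist_component.xml">
```

Note that the comma after the first path stays: it is the separator between the two folder paths in the comma-separated location list.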