Tuesday, May 18, 2010

Enhanced POM for JBoss RESTEasy Twitter API Client Sample

I recently looked at JBoss RESTEasy as a way to create and test RESTful APIs. The platform looks very promising and has received a lot of praise from developers. The documentation also seems extensive and precise.

I started by downloading RESTEasy 1.2.1 GA and tried the sample code, beginning with a Java client that accesses existing RESTful Web Services and APIs. Among the api-clients, there is a small Twitter client that works out of the box (located under /RESTEASY_1_2_1_GA/examples/api-clients/src/main/java/org/jboss/resteasy/examples/twitter).

However, when I extracted the code to create a stand-alone Maven 2 based project, I encountered some issues related to JAR dependency conflicts, including the following error message, also described here.

java.lang.NoClassDefFoundError: Could not initialize class com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl 

The project (Eclipse) structure looks as below:

[Screenshot: Eclipse project structure of the stand-alone api-clients project]

I managed to fix these issues by modifying the POM file as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>org.jboss.resteasy.examples</groupId>
  <artifactId>api-clients</artifactId>
  <version>1.2.1.GA</version>

  <dependencies>
    <!-- RESTEasy core -->
    <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-jaxrs</artifactId>
    </dependency>
    <!-- JAXB support -->
    <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-jaxb-provider</artifactId>
    </dependency>
  </dependencies>

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-bom</artifactId>
        <version>1.2.1.GA</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <!-- Build settings -->
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
    </plugins>
  </build>

  <!-- Environment settings -->
  <repositories>
    <repository>
      <id>jboss</id>
      <name>jboss repo</name>
      <url>http://repository.jboss.org/maven2</url>
    </repository>
  </repositories>

</project>

The most important piece, besides cleaning up the POM file, was to import the RESTEasy BOM POM so that the versions of the individual modules do not have to be specified (see the RESTEasy documentation - Chapter 43. Maven and RESTEasy).

I also made sure to declare the correct dependencies for resteasy-jaxrs and resteasy-jaxb-provider.

As a result, I was able to compile the whole project without any errors (mvn clean compile) and run it to access the Twitter REST API:

mvn exec:java -Dexec.mainClass="org.jboss.resteasy.examples.twitter.TwitterClient" -Dexec.args="<userid> <password>"
(Replace the last parameters with your Twitter user ID and password.)

The small client in question leverages JAX-RS annotations to read and write the Twitter API resources:

package org.jboss.resteasy.examples.twitter;

import java.util.Date;
import java.util.List;

import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;

import org.apache.commons.httpclient.Credentials;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.UsernamePasswordCredentials;
import org.apache.commons.httpclient.auth.AuthScope;
import org.jboss.resteasy.client.ProxyFactory;
import org.jboss.resteasy.client.ClientExecutor;
import org.jboss.resteasy.client.core.executors.ApacheHttpClientExecutor;
import org.jboss.resteasy.plugins.providers.RegisterBuiltin;
import org.jboss.resteasy.spi.ResteasyProviderFactory;

public class TwitterClient
{
   static final String friendTimeline = "http://twitter.com/statuses/friends_timeline.xml";

   public static void main(String[] args) throws Exception
   {
      // Register the built-in RESTEasy providers (including the JAXB provider)
      RegisterBuiltin.register(ResteasyProviderFactory.getInstance());
      // Wrap a pre-authenticated Apache HttpClient in a RESTEasy client executor
      final ClientExecutor clientExecutor = new ApacheHttpClientExecutor(createClient(args[0], args[1]));
      // Create a dynamic proxy from the annotated TwitterResource interface
      TwitterResource twitter = ProxyFactory.create(TwitterResource.class,
            "http://twitter.com", clientExecutor);
      System.out.println("===> first run");
      printStatuses(twitter.getFriendsTimelines());

      twitter.updateStatus("I programmatically tweeted with the RESTEasy Client at "
            + new Date());

      System.out.println("===> second run");
      printStatuses(twitter.getFriendsTimelines());
   }

   public static interface TwitterResource
   {
      @Path("/statuses/friends_timeline.xml")
      @GET
      Statuses getFriendsTimelines();

      @Path("/statuses/update.xml")
      @POST
      Status updateStatus(@FormParam("status") String status);
   }

   private static void printStatuses(Statuses statuses)
   {
      for (Status status : statuses.status)
         System.out.println(status);
   }

   private static HttpClient createClient(String userId, String password)
   {
      Credentials credentials = new UsernamePasswordCredentials(userId,
            password);
      HttpClient httpClient = new HttpClient();
      httpClient.getState().setCredentials(AuthScope.ANY, credentials);
      httpClient.getParams().setAuthenticationPreemptive(true);
      return httpClient;
   }

   @XmlRootElement
   public static class Statuses
   {
      public List<Status> status;
   }

   @XmlRootElement
   public static class Status
   {
      public String text;
      public User user;

      @XmlElement(name = "created_at")
      @XmlJavaTypeAdapter(value = DateAdapter.class)
      public Date created;

      public String toString()
      {
         return String.format("== %s: %s (%s)", user.name, text, created);
      }
   }

   public static class User
   {
      public String name;
   }

}


The small DateAdapter class is a utility class for date formatting:

package org.jboss.resteasy.examples.twitter;

import java.util.Date;

import javax.xml.bind.annotation.adapters.XmlAdapter;

import org.jboss.resteasy.util.DateUtil;

public class DateAdapter extends XmlAdapter<String, Date> {

   @Override
   public String marshal(Date date) throws Exception {
       return DateUtil.formatDate(date, "EEE MMM dd HH:mm:ss Z yyyy");
   }

   @Override
   public Date unmarshal(String string) throws Exception {
       try {
           return DateUtil.parseDate(string);
       } catch (IllegalArgumentException e) {
           System.err.println(String.format(
                   "Could not parse date string '%s'", string));
           return null;
       }
   }
}

Friday, April 16, 2010

SOA and Health Care Meaningful Use requirements of the Recovery Act


The Health Information Technology for Economic and Clinical Health (HITECH) Act was passed by Congress in February 2009. Under this act, eligible providers will be given financial rewards if they demonstrate "meaningful use" of "certified" Electronic Health Record (EHR) technologies.

There is therefore a big incentive for health care vendors to offer solutions that meet the criteria described in the law. More precisely, the associated regulation, an Interim Final Rule issued by the Department of Health and Human Services, describes the set of standards, implementation specifications, and certification criteria for Electronic Health Record (EHR) technology.


As a Software Architect, I was curious to see whether Service Oriented Architecture (SOA) or Web Services in general were mentioned in these documents.

The definition of an EHR Module includes an open list of services such as electronic health information exchange, clinical decision support, public health and health authorities information queries, quality measure reporting etc.

In the transport standards section, both SOAP and RESTful Web services protocols are described. However, Service Oriented Architecture (SOA) is never explicitly described or cited, and there is no reference to how these services might be discovered and orchestrated in a "meaningful way". I would assume the reason is that the lawmakers and regulators wanted to stay as vague as possible about the underlying technologies for an EHR and its components.

The technical aspect of "meaningful use" is specified more precisely when associated with interoperability, functionality, utility, confidentiality and integrity of the data, and security of the health information system in general.

These characteristics are not necessarily specific to SOA, but to any good health care software and solution design.

Still, the following paragraph seems to describe a solution that could best be implemented using a Service Oriented Architecture: "As another example, a subscription to an application service provider (ASP) for electronic prescribing could be an EHR Module", where software is offered as a service (SaaS). This reads more like the description of an emerging SOA than of a full grid enabled SOA.

It will be up to the solution providers to come up with relevant products and tools to maximize the return on investment (ROI) of the taxpayers' money and of the professionals and organizations eligible for ARRA/HITECH.

SOA will definitely be part of the mix since it gives the ability to create, offer and maintain large numbers of complex EHR software solutions (SaaS) with a high level of modularization and interoperability.
 
Further developments toward a complete SOA stack such as offering a Platform as a Service (PaaS) and even the underlying Infrastructure as a Service (IaaS) in the cloud will face more resistance in a domain known for a lot of legacy systems and concerns about privacy and security.

The Object Management Group (OMG) is organizing a conference this summer on the topic of  "SOA in Healthcare: Improving Health through Technology: The role of SOA on the path to meaningful use". It will be interesting to see what healthcare providers, payers, public health organizations and solution providers from both the public and private sector will have to say on this topic.

Wednesday, March 31, 2010

Cloud Computing and Health Care Applications: a change in opinions?

I have designed and implemented health care applications for more than 3 years, and I have experienced a dramatic change in attitudes toward the use of Cloud Computing for Health IT.

Several years ago, the idea of having on-demand resources offered as a service, used to process or store health care related data, was out of the question. The main concerns were the security, privacy and confidentiality of the data, and the reliability and ease of use of the underlying systems and platforms.

Health care solution providers did not hesitate to require tens of thousands of dollars of hardware to deploy a minimal configuration for a multi-tier EHR or PHR web based application. In fact, some players were barely even starting to virtualize their platforms.

One of the requirements to comply with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) regulation is that the transmission of patients' protected health information (PHI) over open networks must be encrypted.

These issues have recently been addressed: companies offering infrastructure as a service, such as Amazon EC2, support 256-bit AES encryption for files containing PHI, as well as token or key-based authentication and sophisticated firewall configurations for their virtual servers. Encryption is also available when storing the data on Amazon S3, and access to S3 from the internet or from EC2 is done via encrypted SSL endpoints, which ensures that PHI stays protected. AWS indeed describes several cloud based healthcare related applications in their case studies, including MedCommons (a health records services provider that gives end users the ability to store, among other medical information, CCR and DICOM documents).
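
As an illustration of the client-side piece, here is a minimal sketch (my own, not AWS sample code) of AES encryption of a file containing PHI before it is uploaded to S3. It uses only the standard javax.crypto API; key management and the actual S3 upload are omitted, and 256-bit keys required the JCE Unlimited Strength policy files on the JVMs of that era.

import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.io.*;
import java.security.SecureRandom;

public class PhiFileEncryptor {

   public static void encrypt(File in, File out, SecretKey key) throws Exception {
      byte[] iv = new byte[16];
      new SecureRandom().nextBytes(iv);               // fresh IV per file
      Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
      cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
      InputStream fis = new FileInputStream(in);
      OutputStream fos = new FileOutputStream(out);
      fos.write(iv);                                  // prepend IV so the file can be decrypted later
      CipherOutputStream cos = new CipherOutputStream(fos, cipher);
      byte[] buf = new byte[8192];
      int n;
      while ((n = fis.read(buf)) != -1) cos.write(buf, 0, n);
      cos.close();                                    // flushes and closes the underlying stream
      fis.close();
   }

   public static SecretKey newKey() throws Exception {
      KeyGenerator kg = KeyGenerator.getInstance("AES");
      kg.init(256);                                   // AES-256 as mentioned above
      return kg.generateKey();
   }
}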

Cloud infrastructure providers such as Amazon Web Services (AWS) ensure that their administrators or third-party partners cannot access the underlying PHI data. Strong security policies, access consent processes, as well as monitoring and audit capabilities are available to dramatically reduce the risk of unauthorized access. In addition, these providers offer highly available solutions for automated back-ups and disaster recovery, which makes them more attractive than traditional solutions. Some providers also ensure that the data in question stays within the borders of specific regions, states or countries to comply with the regulations in place.

In fact, it is very interesting to see health care becoming a showcase for the benefits of cloud computing these days. Last month, at the San Francisco Bay Area ACM chapter presentation on cloud computing, I was surprised to see that the first cloud application example mentioned was TC3. The numbers were indeed very convincing: when faced with a sudden increase in insurance claims processing (from 1 to 100 million per day in a very short time), TC3 had the option of a traditional solution consisting of $750K of new hardware plus $30K of maintenance and hosting per month, or an Amazon Web Services cloud solution for $600 per month. The decision was easy, I suppose!

Friday, February 26, 2010

MapReduce an opportunity for Health and BioSciences Applications?

HealthCare and BioScience software products and solutions have embraced Database Management Systems (DBMS) for their back-end storage and processing for years, like most other domains where performance, scalability, security, extensibility, auditing capabilities and maintenance are critical.

In the past few years, alternative or complementary technologies such as MapReduce and Hive have emerged, originally created to serve the needs of extremely high volume web applications such as Google, Facebook or LinkedIn. A lot of people, especially engineers, are now wondering whether these technologies could be used in HealthCare and BioSciences.

More and more job openings outside the social networks or SEO sphere now mention MapReduce and Hadoop in their required or "nice to have" skills, including at HealthCare and BioScience companies. In fact, at a recent talk of the Bay Area Chapter of the ACM on Hadoop and Hive, even though the talk was quite technical, there were a few venture capitalists in the crowd who were checking whether the topic was only hype or could potentially bring big ROI. Healthcare and biotechnologies were definitely on their minds.

Why then would the MapReduce paradigm be a good candidate to provide the "next quantum leap" for HealthCare and BioSciences?

In HealthCare, as more and more users, patients and professionals upload data to applications such as PHRs and EMRs, there is a need to parse, clean and reconcile extremely large amounts of data that might initially be stored in log files. Medical observations from patients with chronic diseases, such as blood pressure or blood glucose readings, are good candidates for this, especially when they are uploaded automatically from medical devices. Also, the aggregation of data coming from a potentially large number of sources makes the problem more suitable to a map and reduce processing paradigm than to DBMS based data mining tasks.
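
To make this more concrete, here is a hedged sketch (not taken from any existing product) of how such device observations could be aggregated with Hadoop: computing the average blood pressure reading per patient from log lines assumed to have the form "patientId,systolic". It uses the org.apache.hadoop.mapreduce API available at the time.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AvgBloodPressure {

   public static class ParseMapper
         extends Mapper<LongWritable, Text, Text, DoubleWritable> {
      @Override
      protected void map(LongWritable key, Text line, Context ctx)
            throws IOException, InterruptedException {
         String[] fields = line.toString().split(",");
         if (fields.length == 2) {
            try {
               // emit (patientId, reading)
               ctx.write(new Text(fields[0]),
                     new DoubleWritable(Double.parseDouble(fields[1])));
            } catch (NumberFormatException e) {
               // skip malformed log lines
            }
         }
      }
   }

   public static class AvgReducer
         extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
      @Override
      protected void reduce(Text patient, Iterable<DoubleWritable> values, Context ctx)
            throws IOException, InterruptedException {
         double sum = 0;
         long count = 0;
         for (DoubleWritable v : values) { sum += v.get(); count++; }
         ctx.write(patient, new DoubleWritable(sum / count)); // average per patient
      }
   }

   public static void main(String[] args) throws Exception {
      Job job = new Job(new Configuration(), "avg-blood-pressure");
      job.setJarByClass(AvgBloodPressure.class);
      job.setMapperClass(ParseMapper.class);
      job.setReducerClass(AvgReducer.class);
      job.setOutputKeyClass(Text.class);
      job.setOutputValueClass(DoubleWritable.class);
      FileInputFormat.addInputPath(job, new Path(args[0]));
      FileOutputFormat.setOutputPath(job, new Path(args[1]));
      System.exit(job.waitForCompletion(true) ? 0 : 1);
   }
}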

HealthCare decision makers might be hesitant to use these new technologies as long as they have concerns related to security, confidentiality and certification against standards such as HL7 (see CCHIT and HITSP). However, with the overall reforms in progress in HealthCare, it will be interesting to see whether MapReduce will be part of the technical package, for the benefit not only of patients and care givers, but of all healthcare actors including payers and various service providers.

BioSciences (drug discovery, meta-genomics, bioassay activities ...) is also a good candidate for MapReduce. In addition to the fact that BioScience applications also deal with large amounts of data (e.g. biological sequences such as DNA, RNA and proteins), a lot of the data is semi-structured, semantically rich, and most likely better represented as an RDF data model than as a set of database tables (e.g. see "Storage and Retrieval of Large RDF Graph Using Hadoop and MapReduce"). Even though databases have made progress in storing and processing XML, MapReduce is more suitable for very fast processing and aggregation of large amounts of key-value elements.

Another element is price and return on investment (ROI): especially for startups, the fact that MapReduce can be implemented over a cloud based infrastructure using open source frameworks such as Hadoop and Hive makes it an attractive economic proposition for a CTO.

Both fields can also take advantage of other applications of MapReduce in areas beyond hard-core technology, such as brand management, sales and supply chain optimizations, already used with success in other domains.

Thursday, January 28, 2010

Cloudera & Facebook on Hadoop and Hive



This week I attended a very interesting meeting of the San Francisco Bay Area Chapter of the ACM on the topics of Hadoop and Hive. I was not the only one interested in MapReduce related projects, since the meeting, nicely hosted by LinkedIn at their Mountain View office, drew more than 250 people.

Dr. Amr Awadallah from Cloudera gave a very good introduction to Hadoop, since a lot of attendees were not very familiar with this open source Java implementation of MapReduce. It is interesting to mention that the Desktop product offered by Cloudera is free. Amr explained that Cloudera's business model is to offer professional services, training, and specific for-fee features outside the core of the main product.


The Cloudera web site has a lot of good training material on Hadoop and MapReduce. Amr mentioned for example that Hadoop is used at LinkedIn to create and store the "People you may know" recommendations on the fly, whereas the profile information is managed by a more traditional RDBMS data store.


There were a couple of questions related to the behavior of Hadoop on top of full virtualization products such as those offered by VMware. Amr's answer was first to compare the virtualization of platforms with the parallelism involved in MapReduce/Hadoop. The goal of the former architecture is to have multiple virtual machines running on the same hardware (e.g. a large mainframe or blade boxes), whereas the goal of the latter is to have a processing and storage job run on many cheap commodity two rack unit (RU) "pizza" boxes at the same time. So in a way these architectures are complete opposites. Of course, it is not fair to compare the full virtualization of complete operating systems such as Windows or Linux with the management of basic map and reduce operations, even though they have common characteristics (a file system and some processing capabilities).


However, some people do run Hadoop MapReduce tasks on clusters of VMware images, and the question is: "is it efficient?". The answer lies in the way network performance and I/O in general are handled by both the images and the Hadoop scripts.


There was also an interesting question about the fact that Google has several patents on MapReduce, and whether this might be an obstacle to the development of open source products on top of Hadoop. Amr did not seem to really worry about this.

The second presentation was from Ashish Thusoo of Facebook, with some interesting numbers and statistics about the volume of data processed every day by Facebook (e.g. already 200GB/day in March 2008). Ashish pointed out that it was more interesting for Facebook to have simple algorithms running on large amounts of data than complex data mining algorithms running on small volumes. The benefits were more important and the company was learning much more about its users' behaviors and profiles. It was back in 2008 that Facebook started to experiment with MapReduce and Hadoop as an alternative to very expensive existing data mining solutions. One of the issues with Hadoop was the complexity of development and the lack of skills among its teams. This is why Facebook started to look at ways to wrap Hadoop in a friendlier, SQL-like layer. The result is Hive, which is now open source, although Facebook keeps some proprietary components, especially on the UI side.

There were some good questions about data skew issues with Hive and Hadoop, as well as a comparison between Hive and Aster. Like Amr did with virtualization and Hadoop, Ashish contrasted the two approaches in simple terms: in a way, Aster is MapReduce applied on top of an RDBMS layer, whereas Hive is an RDBMS layer running on top of MapReduce.

Both presentations:
  • Hadoop: Distributed Data Processing (Amr Awadallah)
  • Facebook’s Petabyte Scale Data Warehouse (Ashish Thusoo)
are available as PDF files on the ACM web site.


Wednesday, December 30, 2009

A Portal Framework for HealthCare


Portals are Web-based applications that give users a centralized point of access for information and applications of relevance. Therefore the portal paradigm is an attractive proposition for health care: it offers a way to rapidly aggregate heterogeneous applications and services while providing a high level of customization and personalization to users, patients, care givers and IT personnel.

The integration of healthcare systems and data is a major challenge. Business conditions that typically result in fragmented data stores and limited application functionality are prominent in the healthcare industry.

To meet these challenges, we have created a portal framework architecture which makes the SOA concept less abstract by offering a concrete service aggregation infrastructure, including integration glue such as context and code mapping, transformations, a master patient index, single sign on and standards based interfaces. The framework facilitates the integration of various applications, so they need not be rewritten to provide services to the portal. Our portal framework is compliant with industry standards such as JSR 168, JSR 286 and WSRP.
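
As an illustration of these standards (and not actual code from our framework), a JSR 168 portlet can be as small as the following sketch; the portal container takes care of aggregation, window life cycle, customization and personalization:

import java.io.IOException;
import java.io.PrintWriter;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class PatientBannerPortlet extends GenericPortlet {
   @Override
   protected void doView(RenderRequest request, RenderResponse response)
         throws PortletException, IOException {
      response.setContentType("text/html");
      PrintWriter out = response.getWriter();
      // In the framework described above, the patient would come from the
      // shared context layer; here it is hard-coded for illustration.
      out.println("<div>Current patient: John Doe</div>");
   }
}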



In addition to the front-end aggregation layer, a context management layer, which uses a subset of the concepts of the HL7 Clinical Context Object Workgroup (CCOW) standard (centralized scheme, robust push-model, simplified context data representation), is used to solve user mapping and to facilitate the coordination and synchronization between visual components (portlets in our case). This context management layer connects to the Web services (SOAP or RESTful) that are exposed by the different systems.

Sessions and Contexts


A portal application, like any other web application, works with a session. All requests are executed in the context of such a session. The session is associated with an authentication context and a lot of other information that is accumulated while processing the requests executed within the session. A session can be understood as temporary storage with a well-defined life cycle: it is ended either explicitly (log out, connection closing) or by a time-out.



The basic relationship between sessions, identity and context works as follows: when accessing the web application for the first time, no session is established yet. The user is forced to log in (providing his identity and the credentials to prove that identity). This establishes an authentication context which is kept within a dedicated session. During the requests executed in a session, information is accumulated and processed in the session.
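
The following generic Servlet API sketch (an illustration, not our actual implementation) shows the login step that establishes the authentication context inside a dedicated session; the authenticate method and the "authContext" attribute name are placeholders:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class LoginServlet extends HttpServlet {
   @Override
   protected void doPost(HttpServletRequest req, HttpServletResponse resp)
         throws ServletException, IOException {
      String user = req.getParameter("user");
      String password = req.getParameter("password");
      if (authenticate(user, password)) {              // prove the claimed identity
         HttpSession session = req.getSession(true);   // establish the session
         session.setAttribute("authContext", user);    // keep the authentication context in it
         session.setMaxInactiveInterval(30 * 60);      // end the session after 30 min of inactivity
         resp.sendRedirect("portal");
      } else {
         resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
      }
   }

   private boolean authenticate(String user, String password) {
      // placeholder: delegate to the real credential store
      return user != null && !user.isEmpty();
   }
}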

Connecting the Services
Both the portal application (A) and the remote system (B) may have their own identity management capabilities and their own credential storage. In order to integrate A and B, we have implemented an extended SAML based token service. The resulting Security Token Service (STS) includes the token service module as well as an eHF based context management module. This eHF context module stores the mapping information between user identifiers from A and the identities of B.
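
The sketch below illustrates the identity-mapping role played by this context module; all names are hypothetical rather than the actual eHF API. Given an authenticated identity in A, it returns the corresponding identity in B that the STS can assert in the SAML token.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IdentityMappingStore {

   // mapping between user identifiers of portal A and identities of system B
   private final Map<String, String> aToB = new ConcurrentHashMap<String, String>();

   public void register(String idInA, String idInB) {
      aToB.put(idInA, idInB);
   }

   /** Returns the identity of system B to assert in the SAML token, or null if unmapped. */
   public String mapToSystemB(String idInA) {
      return aToB.get(idInA);
   }
}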

More complex scenarios


In reality, portal applications typically consist of multiple portlets that interact together, and each portlet can itself aggregate services from various sources. This is where the portlet proxy is very handy, because it can shield the presentation layer from back-end service implementation details.

The integration of a new application exposing web services (SOAP or RESTful) is made easier because eHF provides a mediation and routing platform component (IPF) based on Apache Camel that can wrap these services, operate transformations on the data and expose them to the portlet proxies. In addition, the current use of the Security Token Service for authentication can be complemented by a Single Sign On (SSO) mechanism.

For this specific implementation we used Liferay 5.2 as the portal server container, and a medicine cabinet as the healthcare related topic and material.

More details can be found in the paper "A Portal Framework Architecture for Building Healthcare Web-Based Applications" published at the 3rd International Conference on Health Informatics (Health Inf 2010).

Wednesday, November 25, 2009

Context Management, CCOW & HealthCare


What is Context Management?

Context Management is a dynamic computer process that uses 'subjects' of data in one application to point to data resident in a separate application containing the same subject.
Context Management allows users to choose a subject once in one application and have all other applications containing information on that same subject 'tune' to the data they contain, thus obviating the need to redundantly select the subject in the various applications.
In the healthcare industry, where context management is widely used, multiple applications operating "in context" through a context manager allow a user to select a patient (i.e., the subject) in one application; when the user enters another application, that patient's information is already pre-fetched and presented, obviating the need to re-select the patient in the second application.
In other words, it enables clinicians to select a patient's name once in one application and have their screen automatically populate with links to that patient in other applications.
  • Context management is especially used in Patient Information Aggregation Platforms (PIAP) such as Portals.
  • Context Management can be utilized for both CCOW and non-CCOW compliant applications.

What is CCOW?

Context Management is gaining prominence in healthcare due to the HL7 Clinical Context Object Workgroup (CCOW) standard committee, which has created a standardized protocol enabling applications to function in a 'context aware' state.
The CCOW standard exists to facilitate more robust, near "plug-and-play" interoperability across disparate applications.
The Health Level Seven Context Management Standard (CMS) defines a means for the automatic coordination and synchronization of disparate healthcare applications that co-reside on the same clinical desktop.

The clinical context is composed of a set of clinical context subjects. Each subject represents a real-world entity, such as a particular patient, or a concept, such as a specific encounter with a patient.
By sharing context, applications are able to work together to follow the user's thoughts and actions as they interact with a set of applications. These applications are said to be "clinically linked."



The CMS is extremely prescriptive, but as it is only a standard it can only go so far in guiding how applications are actually designed and implemented. Variability among the decisions that application developers make can lead to various amounts of confusion for users of multiple independently developed CCOW-compliant applications.



  • HL7 CCOW Context Management Specification acronyms:
    • CMA: Technology and Subject-Independent Component Architecture
    • SDD: Subject Data Definitions
    • UIS: User Interface (Microsoft Windows and Web Browsers)
    • ATM: Component Technology Mapping (ActiveX)
    • WTM: Component Technology Mapping (Web)
CCOW - Context Management Architecture (CMA)

At the most abstract level, the Context Management Architecture (CMA) provides a way for independent applications to share data that describe a common clinical context. However, the CMA must provide solutions for the following problems:
  • What is the general use model for a common context, from the user's perspective?
  • Where does the responsibility for context management reside?
  • How are changes to context data detected by applications?
  • How is context data organized and represented so that it can be uniformly understood by applications?
  • How is context data accessed by applications?
  • How is the meaning of context data consistently interpreted by applications?
  • CMA characteristics (a minimal code sketch follows this list):
    • Centralized scheme: The responsibility for managing the common context is centralized in a common facility that is responsible for coordinating the sharing of the context among the applications.
      • The consequence of the service being a single point of failure is offset by the fact that the service and the applications it serves are typically co-resident on the same personal computer.
      • The consequence of the service being a performance bottleneck is offset by the fact that the applications themselves are far more likely to become the performance bottlenecks.
    • Robust push-model: This is a push model that deals with synchronization and partial failure issues.
    • Context Data Representation uses Name-value pairs:
      • A set of name-value pairs represent only key summary information about the common context (e.g., just the patient's name and medical record number).
      • The symbolic name for an item describes its meaning.
      • The data types for the items come from a set of simple primitive data types.
    • CMA maintains a single authentic copy of the common context for each common context system.
      • Applications can choose to cache context data or they can simply access the authentic copy whenever they need to.
      • Applications can also selectively read or write specific context data name-value pairs.
      • When the context changes, an application is only informed about the change and is not provided with the data that has changed.
      • The application can selectively access this data when it needs to.
    • Context Data Interpretation
      • Standard HL7 CMA subjects and associated context data items include the core subjects of patient, encounter, observation, user, and certificate, and their respective context data items.
      • Organizations, such as healthcare provider institutions and vendors, may define their own context subjects and data items. These items are in addition to the standard subjects and the standard items defined for the standard subjects.
      • Context item names are case insensitive.
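
The following minimal sketch (hypothetical, and much simpler than the real CCOW interfaces) illustrates these characteristics: a single authentic copy of the context kept as case-insensitive name-value pairs, with participants notified only that the context changed so they can selectively read the items they need.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class SimpleContextManager {

   public interface ContextParticipant {
      void contextChanged();                 // notified of the change only; no data is pushed
   }

   // single authentic copy of the common context; item names are case insensitive
   private final Map<String, String> context =
         new TreeMap<String, String>(String.CASE_INSENSITIVE_ORDER);
   private final List<ContextParticipant> participants =
         new ArrayList<ContextParticipant>();

   public void join(ContextParticipant p) {
      participants.add(p);
   }

   public void setItem(String name, String value) {
      context.put(name, value);
      // robust push: tell every participant that the context changed
      for (ContextParticipant p : participants) p.contextChanged();
   }

   /** Participants selectively read the items they need (e.g. a patient identifier item). */
   public String getItem(String name) {
      return context.get(name);
   }
}
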
Existing solutions using Context Management
  • Fusionfx from Carefx
    • A Context Manager function is responsible for establishing the links among the applications, which serve as Context Participants.
    • Context Participants synchronize after querying the Context Manager to determine the current context and when to update the context.
    • Context Management also supports Mapping Agents, which map equivalent identifiers when the context is updated so that all participating applications can interoperate.
    • Fusionfx includes JSR-168 based front-end viewers (java portlets) running on IBM Websphere portal solution.
  • Vergence from Sentillion
    • Vergence Wizard (April 2009): a tool to configure Single Sign-On (SSO) and Context Management application interfaces
      • Fast single sign-on and managing expired passwords
      • Graceful termination of applications at sign-off
      • Support for single sign-on and patient context management use cases
      • Point and click interface to select application controls and associated actions for Windows and Web-based applications
      • Pre-defined actions, which include common navigation, text entry and event monitoring tasks such as Click, Enter Text, Select Menu ..
      • Integrated playback mechanism for testing individual steps
      • Optional display of detailed logs during playback
      • Extensible interface simplifies insertion of custom actions and preserves customizations when regenerating the Bridge
      • Plug-in architecture enables extensibility to incorporate new actions and events and for accommodating new application development technologies
    Sentillion has a patent on context management (US 6,993,556).
Existing solution using CCOW as a participant

  • Centricity Framework from GE
    • Centricity Framework offers GE developers a way to integrate separate GE products, while consolidating sign-on and security to a single point of entry.
    • Developed by the Advanced Technologies Group (ATG), Centricity Framework provides a consistent presentation of login, navigation (menus), and patient banner.
    • Hosted products share context information and can offer cross-product workflows, regardless of their UI technology.
    • Centricity Framework 5.0 (CF 5.0) offers a choice of two client desktop solutions (both require .NET 2.0 on the client desktop):
      • Traditional browser-based Web client (as provided in 4.x)
      • Iris, a .NET-based client solution that does not require Internet Explorer (based on Microsoft smart client technology)
    • Centricity Framework 5.0 supports the Sentillion single sign-on (SSO) solution and Carefx's context manager. The Carefx library is loaded only if CCOW is enabled for the workstation/user.
    • Communications with the Web Framework (WF) server are done via XML-based service calls. Each instance of the CF exposes a certain URL as the handler for all service calls. This URL can be found in the DataURL tag in the ServerInfo.xml file, which is located in the WF's main web folder:
      • servlet/IDXWFServlet (for Tomcat-based installs)
      • IDXWFData.asp (for IIS-based installs)
    • CF enables the use of a security plug-in to implement an alternate authentication mechanism in place of the standard Framework username/password check. If a security plug-in is used, the plug-in performs any server-side authentication of users or of authentication tokens generated on the client. The Framework currently provides plug-ins for:
      • Kerberos (v4.01)
      • RSA SecurID (v5.0)
      • CCOW userlink (v4.0)
      • CCOW/LDAP integration (v4.03)

Alternatives to CCOW

Worth noting is the Global Session Manager (GSM) developed at Siemens, which offers features almost equal to a CCOW environment but is much easier to implement.
The patented system includes:
  • A system that enables (Web) applications to be integrated into a process involving concurrent operation of applications
  • The system specifies the rules for conveying URL data and other data between applications
  • The system employs a managing application and services (session manager) to facilitate application session management
  • The system is employed by a first (parent) application for supporting concurrent operation with other (children) applications.
  • The system involves an entitlement processor for authorizing user access to the first (parent) application in response to validation of user identification information.
  • The system involves a communication processor for communicating a session initiation request to a managing application to initiate generation of a session identifier particular to a user initiated session.
  • The session manager is used by the managed applications to reference global data that is essential to a workflow. Such global data includes:
    • user identification information
    • a shared key used for the encryption of URL data
    • a common URL to be used for handling the logoff and logon functions.
  • The session manager is regularly notified of activities from the applications to prevent an inactivity timeout while a user is active in another concurrent application.
  • The session manager employs a system protocol for passing session context information between applications via URL query or form data.
    • The session context information comprises:
      • a session identifier (used by the managed applications to identify a user initiated session in communicating with manager)
      • a hash value (used by the managed applications to validate that a received URL has not been corrupted)
      • application specific data (can be encrypted)
  • The session manager uses a unique session identifier (SID) for each new session (to protect against corruption and replay of a URL)
  • In addition to this, to avoid redirection, the parent application needs to generate a URL link with an embedded hash value computed from the domain, port and file pathname of the URL (e.g. using RSA MD5; a minimal sketch appears at the end of this post)
  • The communication protocol between the client browser and the applications is HTTP
  • The communication protocol between the applications and the Session manager is TCP/IP.
Siemens has a patent on this technology (US 7,334,031).
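
As promised above, here is a minimal sketch of the URL-hashing step; the exact canonical form and the way the shared key is mixed in are assumptions, since the patent is not summarized at that level of detail here. It computes an MD5 hash over the shared key plus the domain, port and file pathname, which the parent application embeds in the URL handed to the child application.

import java.math.BigInteger;
import java.net.URL;
import java.security.MessageDigest;

public class UrlHash {

   public static String hash(String link, String sharedKey) throws Exception {
      URL url = new URL(link);
      // canonical form assumed here: domain, port and file pathname of the URL
      String canonical = url.getHost() + ":" + url.getPort() + url.getPath();
      MessageDigest md5 = MessageDigest.getInstance("MD5");
      md5.update(sharedKey.getBytes("UTF-8"));   // mix in the shared key so the hash cannot be forged
      md5.update(canonical.getBytes("UTF-8"));
      return new BigInteger(1, md5.digest()).toString(16);
   }

   public static void main(String[] args) throws Exception {
      // the parent application embeds this value in the URL it hands to the child,
      // which recomputes and compares it to detect a corrupted or tampered URL
      System.out.println(hash("http://portal.example.com:8080/app/page", "sharedSecret"));
   }
}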