Archive for the ‘BPM’ Category

So here’s a BPM capabilities matrix from the not-too-distant past. I won’t say which two BPM platforms were evaluated for the matrix below, much less the client or the effort, but I’m sure many have seen this format before and it’s intended as just one starting, jumping-off point.

As I work on this BPM/Case Management series in the coming weeks and months (yes, it will take that long) I hope to blow this out, draft multiple use cases for both structured and unstructured processes and, ultimately, draft a white paper on the products examined that goes way deeper under the hood than anything Gorrester or Fartner have ever done.

I’m soliciting any and all input on this artifact and the use cases both, so I can make sure I properly address how each platform’s technology performs any given piece of functionality, whether that’s good or bad, and why. Feel free to comment here or ping me on Twitter with your input.

I have a LOT of resources and references that I’m going to go through and use for this plus, of course, running VMs for all the platforms that will be examined (interrogated? <g>). I want to make sure I get it as comprehensive and unbiased as possible. For the code-inclined, there’s a rough, machine-readable sketch of the matrix right after the outline below.

Cheers, Pat

Modeling
        Graphical User Interface
        Icons to represent Steps
        Connections between steps
        Drag & Drop
        Reusable Submaps
        Process and Property Inheritance
        Model Management
        Fine grained security
        Public & Private Inbox/Workbasket & Worklist
        Split and Merge steps
        Spawn new workflows
        Deadlines
        Timers
        Notification
        Interfaces – Web Service, Human, Queue
        Import standard Modeling Languages – BPMN, XPDL
        Visio Interface
Simulation
        Parameterization
        Resource Allocation
        Reporting
        Work shift creation and assignment
        Intelligent metadata repository
        Intelligent Recommendations
Automation
        Rules Engine communication
        Dynamic workflow customizations
        Decision making
        Intelligent Routing
        Skill based routing
        Content based routing
        Document Management Functionality
        Load balancing work
        Event Driven
        Scheduling
        Intelligent workflow activity traversals
        Integration with external systems
Monitoring
        Dashboard
        Ad-hoc reporting
        Scheduled reporting
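
Here’s that minimal sketch of how I might encode the matrix above for side-by-side scoring. To be clear, the platform names and scores are placeholders and the capability list is abridged; this is NOT the actual evaluation.

```python
# Hypothetical encoding of the capabilities matrix above (abridged).
# Platform names and scores are placeholders, not the real evaluation.

CAPABILITIES = {
    "Modeling":   ["Graphical User Interface", "Reusable Submaps",
                   "Import standard Modeling Languages - BPMN, XPDL"],
    "Simulation": ["Parameterization", "Resource Allocation", "Reporting"],
    "Automation": ["Rules Engine communication", "Intelligent Routing"],
    "Monitoring": ["Dashboard", "Ad-hoc reporting", "Scheduled reporting"],
}

# 0 = absent, 1 = partial / via add-on, 2 = native and solid -- a crude scale
scores = {
    "Platform A": {"Reusable Submaps": 2, "Intelligent Routing": 1},
    "Platform B": {"Reusable Submaps": 1, "Intelligent Routing": 2},
}

def total(platform):
    """Sum a platform's scores across every capability it was rated on."""
    return sum(scores[platform].values())

for name in scores:
    print(name, "scores", total(name))
```

Nothing fancy, but once the whole matrix is filled in you can diff, weight and total the platforms without squinting at a spreadsheet.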

So what’s the moral of your story, Mr. Peabody?

  • This is gonna be bigger than a breadbox.

What’s next?

Read Full Post »

When examining business and technical trends, particularly with today’s social media echo chambers’ constant hyperbolic blare, it is instructive to look at history and understand how we got to where we are.

In the case of IBM Case Manager (ICM), I first took note of it and began paying attention in late 2010, reaching out to IBM ECM Education and saying that as soon as there was a class for it I was interested. (Sidebar to the technologists: always try to stay in the vanguard; the bleeding edge pays. I’ll explain more later.) So, in the fall of 2011 I trotted off and took IBM Advanced Case Manager (ACM) Installation concurrent with FileNet P8 5.0 Installation that same week. I took the latter class with a then-client present in the room and will hit them up in the #SoC series later.

Back then it was Advanced Case Manager and, depending upon the vendor, the ‘A’ stood for something else – ‘Advanced’ (IBM), ‘Adaptive’ (ISIS Papyrus, more on @maxjpucher later; remember the history angle throughout this series) – or it was even a different letter, as in Dynamic Case Management (Pega). We’ll come to each of those in turn. On the Big Blue side of things, suffice to say that over the course of the past four or five years everything has come to have an “I” in front of it. Go figure.

Flash forward to current times and everybody pretty much generically just says “Case Management” and leaves it at that. 😉

[Image: ICM 5 installation chronology]

Turning back to ICM c.2010/2011, the installation chronology for installing, configuring and deploying the apps (plural) was thus. For ICM it’s important to remember that in FileNet P8 5.0, Process Engine (one of the three app servers in the base “triad” of P8) was newly made a Java (not J2EE) app, and that Content Engine and Workplace/XT were JEE apps on middleware, as was/is ICM.

[Image: ICM 5.2 deployment]

And, moving forward in time five years, PE has now been folded in with CE to become “CPE,” Content Process Engine, on middleware, though the table structures in the underlying DB(s) are still there. More on that later as we dive down and that aspect’s importance and relevance are examined vis-a-vis the other platforms we’re going to look at. However, we still have three .EARs and one .WAR deployed out on WebSphere. By the way, the discourses on ICM are pretty much going to look at it from a “blue-washed” perspective, so you can pretty much assume WebsFear throughout this series whenever I talk about ICM.
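
If you want to eyeball those deployments yourself, here’s a minimal sketch using WebSphere’s wsadmin tool in Jython mode. AdminApp.list() is genuine wsadmin; the application names in the comments are illustrative only, since yours will depend on the install.

```python
# Run inside WebSphere's wsadmin (Jython mode):
#   wsadmin.sh -lang jython -f list_deployments.py
# AdminApp is injected by wsadmin itself; this won't run under plain Python.

apps = AdminApp.list().split("\n")
print("Deployed applications:")
for app in apps:
    print("  " + app)

# Eyeball check for the three .EARs and one .WAR mentioned above.
# Illustrative names only -- your CE/CPE, Workplace XT and ICM
# deployments may be called something else entirely.
```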

Now, though users and most people who aren’t on the bare metal probably don’t care that much about a bottom-up, systems-architecture view of any of this, I give it to you and point it out because, whether they know it, care or acknowledge it or not, a platform’s legacy and architecture very, very much determine how it does things the way it does: how the technology affects the business functionality, which things it does well, how and why. All of these platforms and their technologies have idiosyncrasies and what I like to call “tips, tricks and traps.” All have been around long enough now to have an empirical body of knowledge and a history of what constitutes best practices for that given platform and its underlying technology.

For this opening salvo I just want to point out that, in the case of ICM, it is an application (dare I say “framework”?) that sits on TOP of IBM Case Foundation (née IBM FileNet P8) and needs ICF in order to do its thing. Other than a couple of kinks I’ll point out, the same is not true of the other platforms we’re going to examine and dive down into. Each has its pros and cons.

So what’s the moral of your story, Mr. Peabody?

  • The smoke and mirrors truly are there to distract, keep you from looking behind the curtains.
  • “Those who cannot remember the past are condemned to repeat it.”

What’s next?

Read Full Post »

Genesis – Case Management #BPM

[Image: P8 IM and CM suites]

In the beginning, everything was closed, everything was proprietary. Hardware, software. Client, server. There were IWSs, CWSs, FDOS servers and big, huge honking jukeboxes with big, huge honking platters, though they didn’t hold very much by today’s standards. And the peanut gallery, at the behest of the purveyors, said “It was good,” and for the most part, at the time, it was. Other than the bloody expensive part.

Then came PCs and server-side opened up to AIX first, then slowly, but surely, the other UNIX versions (but not Linux yet) and WorkFlo Business Systems and the purveyors said “Let there be lots and lots of 3.5″ installation diskettes.” And so we had WorkFlo Script and a runtime interpreter and Windows for Workgroups 3.11 was the shite. And the FileNuts were one of the big three and the term “workflow” was avante garde. Images got scanned, distributor queues were filled, documents got routed and the peanut gallery said “It is good.”

In relatively short order we had WorkFlo Controls for Visual Basic and, for a brief period of time, WorkFlo Power Libraries as well. Under the hood all was WorkFlo Application Libraries (C, not C++) and we were procedural beings. Then came ActiveX, Panagon IDM Desktop, and the purveyors and peanut gallery went “huzzah.” Concurrently we had Visual WorkFlo 1.0, then 2.0, and the workflow performer (execution) was abstracted out from the workflow objects, and VW_db and like prefixes were all about us. Elsewhere at the same time, acquisitions were occurring, and one in the great Northwest in particular foreboded the future.

Not all was good though, DCOM and JiGlue rained terror from the servers on the programming peasants throughout the NT domain.

Soon thereafter there was the promise of light, with praise from the choir, and there were Acenza and BrightSpire, but no one knew what they were or why, and their lights went out as quickly as they were lit. Then came “P8” and the peanut gallery said “What?” and the purveyors said “ECM is content, process and connectivity and BPM is automation, integration and continuous process improvement” – ten years before anyone else got a bug up their arse about “continuous process improvement,” “analytics,” “practice” or BPM vs. BPMS. And we had CE, PE and AE, and the peanut gallery said “better.”

Then came the great shifts. CE, the heart and soul of our world, went from three NT services to J2EE. And the middleware people became bigger pains in the arse than the database ones because, well, middleware talent was, and still is, in scarce supply to this day. Thus our migration from client/server to n-tier. Along the way Workplace (AE) got cleaned up into Workplace XT, while PE and its Visual WorkFlo forebears were slowly left by the wayside. True, there were improvements, but not to the same extent as in the rest of the BPM world.

Today there are the big three – IBM Case Foundation, IBM Content Navigator and IBM Case Manager. That last sits on top of the first two, but just for shits and grins here’s a definition of case management from Business Process Framework (BPF – ICM’s older, distaff cousin, a tale in itself).
 
“BPF as a case management solution provides a framework for the development and deployment of business solutions based on the integration of content and workflow management. The central concept behind a case management solution is the Case. CE provides content management services, the PE provides business process services; the Case file provides integration between the two.”
 

Sound familiar? It was written in 2005.
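
Just to make that 2005 definition concrete, here’s a toy sketch of a case file stitching content to process. It’s my illustration, not BPF’s or ICM’s actual object model.

```python
# Toy model of the BPF idea above: the Case is the glue between content
# (CE's job) and process (PE's job). My illustration, not either
# product's actual object model.
from dataclasses import dataclass, field

@dataclass
class Document:                 # the kind of thing CE manages
    title: str

@dataclass
class Workflow:                 # the kind of thing PE manages
    name: str
    status: str = "running"

@dataclass
class Case:                     # the case file integrates the two
    case_id: str
    documents: list = field(default_factory=list)
    workflows: list = field(default_factory=list)

case = Case("CASE-0001")
case.documents.append(Document("Loan application"))
case.workflows.append(Workflow("Underwriting review"))
print(case.case_id, "has", len(case.documents), "document(s) and",
      len(case.workflows), "process(es)")
```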

So what’s the moral of your story, Mr. Peabody?

  • Old code never dies, it just gets abstracted.
  • Purveyors like to refry leftovers.
  • The more things change…

What’s next? I’ll tell you what’s next. We’re going down the rabbit hole.

Read Full Post »

“A COE is best practices and lessons learned standardized and codified into a framework and repeatable methodology.” – Me

[Image: COE Operating Model]

Third time in seven days, at two conferences, that I attended a breakout session on BPM Centers of Excellence. COEs are a subject of interest to me in that I’ve done it four times in the past eight years, and it is (or should be) about sharing the wealth (knowledge, assets). First time was for a large Southeast financial as an LOB solutions architect protesting the fact that the COE wasn’t getting me the things I needed. Second time was for a large bank in the Northeast; the infra guys loved me because, as the lead architect, I kept bad apps from getting into production (the #1 goal, in my opinion). Third go ’round I helped spin up the community forum for a large insurer in the Northeast, submitting an operational model for their COE. Latest go ’round is with, once again, a large financial. Supposedly there’s a COE out there somewhere, but it’s pretty ethereal when it comes to finding standards, assets and assistance; we’ll talk ‘visibility’ later.


Anywayz, back to AIG’s “AIG Leverages Their Center of Excellence for Strategic Advantage” session a week ago yesterday at PegaWORLD 2014. They use Forrester’s definition of a COE:

  • delivery assurance (code reviews, design reviews)
  • resources and skills
  • technology and architecture (common platform)
  • QA
  • infrastructure

and implemented their COE because they had a lot of everything – apps, support groups, partners, platforms. Though they’ve had Pega for a while, they implemented their COE with a mission of thought leadership: to accelerate projects, mitigate risks and function as vendor liaison. In flight for about 1.5 years, they’re running a centralized model; coming out of the gate they were, and are, working on managing resources, frameworks and integrations, with centralized repositories on the way down the road. Standard benefits realized: efficiencies, standards, a standardized estimating process, re-use, having specialists, driving agile use and achieving resourcing flexibility. Like a lot of people, they’re still working on Agile and did acknowledge they’re still Waterfall within the org. Another good thing they’ve done, IMHO, is implement a community forum (more on that later).

Some other items of note:

  • They use Pega cloud for their sandbox, dev environments and provision quickly to new LOB development teams.
  • Enterprise Architecture is present, with an EA in a COE leadership role and on the core COE team; that also is important.
  • Like a lot of people in the Pega world they’ve built a core framework on top of the Pega base.

On the subject of “lessons learned,” and appreciating their candor, they did pony up and acknowledge that, like a lot of shops, funding at the corporate or project level is an issue. About 40% of their funding comes from the business units (resource allocation). Other lessons learned:

  1. Pega expertise to build guardrail-compliant BPM apps
  2. Alignment with Enterprise Architecture
  3. Central repository and structure
  4. Management support (always key)
  5. They did well on documentation but fell down on putting it out there, visible, in a central repository.

The senior information officer hedged with an “I call Pega” when I asked the pointed question of what they do if any of the dev teams nuke their sandbox. Me, I have in the past just reverted the VM (on premise or otherwise) to a previous snapshot and said, “Hope you have your stuff backed up.”
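
For the record, my “revert and pray” routine looks roughly like the sketch below. It assumes VirtualBox, and the VM and snapshot names are made up; translate to your hypervisor of choice.

```python
# Sketch of the "nuke it and start over" sandbox recovery, assuming
# VirtualBox. VM and snapshot names are hypothetical; VBoxManage must
# be on the PATH.
import subprocess

VM = "pega-sandbox"          # hypothetical VM name
SNAPSHOT = "clean-install"   # hypothetical snapshot name

# Power off hard (ignore failure if it's already off), restore, restart.
subprocess.run(["VBoxManage", "controlvm", VM, "poweroff"], check=False)
subprocess.run(["VBoxManage", "snapshot", VM, "restore", SNAPSHOT], check=True)
subprocess.run(["VBoxManage", "startvm", VM, "--type", "headless"], check=True)
print("Hope you had your stuff backed up.")
```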

Later Monday afternoon there was a panel session with Cisco, JPMC, BNY Mellon and Jabil Circuits talking about their Pega BPM COEs. Five to ten minutes per individual, with a short deck each and backgrounders on their COEs; my main grouse about this one was that there wasn’t enough time left at the end to ask questions. Interestingly enough, just as with the morning session, there were people from other orgs’ COEs in the audience, and I wish a more extensive dialogue could have occurred. If you’re interested, Pega has spun up an on-going forum and as soon as I learn what the true e-mail address for requesting participation is (it’s not PegaWORLD2014COE@pega.com) I’ll pass it on.

Flash forward a week. Last, but not least, Scott Simmons of IBM talked about COEs as well (does Big Blue still call them competency centers?). Scott’s deck demonstrated he’s been around the block a time or two, and I was mostly on board with the practical advice he was giving. I did disagree with him on the most important of his three features of a COE: he stated it was re-usable assets; I maintain it’s governance. Why? Me, myself and I, applications/solutions or systems/enterprise architecture, in the trenches: my number one goal is to keep a bad app from getting into prod.

I’ll look at Scott’s deck later, but interestingly enough he was one of several individuals with some link to Colorado who are here at the conference.

So what’s the moral of your story, Mr. Peabody?

  1. It’s about expertise, period, plain and simple.
  2. For me, it’s all about the governance piece. If you don’t have the empowerment, the authorization, the… “teeth,” there’s not much point.

What’s next?

Read Full Post »

Arriving Saturday afternoon, I meandered over to the convention center that evening to pick up my badge, start examining the full conference guide and read the breakout sessions’ abstracts, noting, as at most conferences, that Saturday was partner sales day. Different vibe than when the users and attendees show up, but I digress. There was a full day of sessions on Sunday, which you can see here, but this year I chose to pony up and spend the day in classes nosing around in virtual machines. First up, Pega 7 Overview.

This time last year in Orlando we were getting a preview of PegaPRPC 7 and the new Designer Studio. Pega 7 has the goals of:

  1. Works the Way the Business Thinks
  2. Simplifies and Accelerates
  3. mmm… I don’t remember. So much for the theory that doing, saying things in “3’s” will stick in your head. 😉

In any event the idea, or emphasis, is that Pega 7 is about building business rules for business users. As part of the introduction the deliberately provocative statement “Business processes don’t change, they’re static” was made, but the punch-line elaboration was that it’s actually the business rules that are changing constantly. Back to the lab and Pega 7: Designer Studio is built on HTML5. Beyond the initial layout, when adding screen elements, moving things around is as simple as left-clicking and dragging them to where you want them. Harnesses, flow actions and such are still there under the hood, but they’re not up in your face as in the past; Designer Studio puts a nice facade over all that.

For those back in the PRPC 6.x world, they do have a migration tool. With some advance planning it should be a fairly, if not fully, automated process. Patches are now fully inclusive as well (e.g., PegaPRPC 7.1.4 will include everything in 7.1.1–7.1.3).

On to Case Designer for Business Architects. I’m a technical guy at the core, but when wearing my solutions or enterprise architect hat at a client site I like to see and know what the BAs are up to since, in recent years, multiple BPM platforms and vendors have been moving toward getting knowledge workers and business architects into the process designer to design the process flows.

A few definitions off deck slides in the class:

  1. “A case (n) is a business transaction to solve.” Cases have a status and at least one process, and have actors, tasks, data and history.
  2. “Process management (v): adapting to changing business conditions.”
  3. Case management (n) is “a holistic view of a business transaction,” and a case type is the set of tasks needed to automate a business transaction.
  4. A step is an action, a step in a stage.

So now we know what case management looks like from this vendor’s point of view. Diving down deeper we have the idea of “stages,” which are the key milestones, markers or phases that the business process must go through. Guardrails are still there and the “rule of 7” still applies. That is, for the first-level grouping of stages in a case, keep it to 7 +/- 2; if you have more than that, chances are you haven’t decomposed enough. When defining stages, pay attention to transfers of authority or significant changes in status.

Some quick best practices for defining case steps:

  1. use an iterative approach
  2. ignore the details of each step
  3. set the expected order of tasks
  4. keep steps universally understood
  5. limit decomposition levels
  6. keep them easily communicated

Standard schtuff. In Pega Case Management every stage needs at least one step, the “Default step,” and steps can be configured as single-step assignments, multi-step processes or a case. In other words, case management is just a bunch of multi-step processes (a split-join in BPMN parlance). SLAs and tolerance intervals are still there, of course, and can be set at both the case level and the assignment level.
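
To make the stage/step vocabulary concrete, and to turn the “rule of 7” into an automated nag, here’s a little sketch of my own. It’s an illustration, not Pega’s API, and the case type is invented.

```python
# Illustration of the stage/step vocabulary above, with the "rule of 7"
# (7 +/- 2 first-level stages) as a lint check. My sketch, not Pega's API.

def check_rule_of_seven(case_type, stages):
    n = len(stages)
    if n > 9:
        print(case_type + ":", n, "stages -- probably not decomposed enough")
    elif n < 5:
        print(case_type + ":", n, "stages -- maybe decomposed too far")
    else:
        print(case_type + ":", n, "stages -- within 7 +/- 2")

# Every stage needs at least one step ("Default step" in Pega terms);
# a step can be a single-step assignment, multi-step, or a child case.
onboarding = {
    "Intake":      ["Default step"],
    "Review":      ["Verify identity", "Credit check"],
    "Approval":    ["Default step"],
    "Fulfillment": ["Provision account"],
    "Close":       ["Default step"],
}
check_rule_of_seven("Customer Onboarding", list(onboarding))
```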

A couple of other items of note: PRPC can do federated case management, but the documentation says only between versions 6.3 and 7.x. There was some talk of people in the UK doing it with versions 6.1 and 6.2, but the exercise wasn’t trivial.

Also of note: the instructor was accessing Designer Studio on a Mac with Google Chrome. Internet Exploder 11 isn’t yet compatible with PRPC 7.1.4, but will be. Conversely, Pega 6 doesn’t work with Chrome. The goal, the direction, is for there to be no asterisks on browsers and versions.

That’s what stuck in my head; I’ll doink around more on Pega Academy later and get lower to the ground on this stuff.

Props to Pega for putting the PRPC server in a small Ubuntu/Postgres VM and accessing the PRServlet URL from the host OS browser. Those puppies were small and fast (less than 7GB).
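
If you spin up something similar, a quick smoke test from the host OS can be as dumb as the sketch below. The guest address and port are assumptions for a default-ish install; the /prweb/PRServlet path is the one mentioned above.

```python
# Quick smoke test of the PRPC VM from the host OS. The host/port are
# hypothetical -- point it at wherever your guest actually listens.
import urllib.request

URL = "http://192.168.56.101:8080/prweb/PRServlet"  # hypothetical address

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        print("PRServlet responded:", resp.status)
except Exception as exc:
    print("No joy:", exc)
```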

Read Full Post »

On Kool-Aid and iocane powder

“You fool! You fell victim to one of the classic blunders – the most famous of which is ‘never get involved in a land war in Asia’ – but only slightly less well-known is this: ‘Never go in against a Sicilian when death is on the line’! Ha ha ha ha ha ha ha! Ha ha ha ha ha ha ha! Ha ha ha…”

I’ll keep it short and sweet. Here’s the thing: if your shit – operating system, database, ECM platform, BPM platform, ACM platform – was the shit, did it the most elegantly, the best way, for all situations, for all functions, for all clients, well then, everybody would be using it, wouldn’t they? Shouldn’t they? Oooor not…

If your shit truly is the shit, how come nobody else gets it? What is the rest of the world missing that you’re not? How come you’re the only one who gets it and nobody else does? And why isn’t the rest of the world listening to you? Things that make you go hmmm…

So what’s the moral of your story, Mr. Peabody?

Maybe, just maybe, your shit isn’t the shit.

Sometimes I like lemonade. Sometimes I like root beer.

#ThatIsAll

Read Full Post »

“One architecture to rule them all. One architecture to find them. One architecture to bring them all and in the machine room bind them.”

This post is not so much a redux of my very first blog post as a continuation of it. In fact, this post will be the first of an on-going series regarding reference architectures, architectural strategies and long-term roadmaps within the context of both ECM and BPM. The series will drop from 20,000′ to 200′ as I descend through the cloud on a large technical architecture recommendation and roadmap for a client. Without belaboring the efforts delineated in that first post and their respective outcomes, this one will take everything I knew up to that point, since that point and beyond as well. I’m going to stretch on this one.

I’m going to take everything I’ve observed, learned and done in the past twenty years in this space and project it (yes, believe it, you heard, read it here first) three years forward. Agreed, that’s a long time in today’s world. The issue for this “technical recommendation,” though, will not be technology; it will be execution. The roadmap will be three years in duration because that is the degree of effort required to effect it. And this time I have the juice, the influence and the appropriate ears to see it through.

Beyond the initial table-of-contents screen shot above, this will actually be one of three artifacts I produce for this client, the second being a Lunch ’n Learn curriculum I will draft delineating the “how” of everything in the ref arch, and the third covering some of the tools I will be using to blow out portions of the technical recommendation (e.g., the ontology and the taxonomy).

In short, this thing is going to be a freaking masterpiece. It hasn’t been explicitly mandated, but it has been tacitly requested within the scope of my effort and participation with this client. They’re big, huge in fact, and so will this be as well. As for myself, it will be the progenitor of multiple case studies and, longer term, of one service offering and one commercial, shrink-wrapped product I’ve long dreamt of and want to build and sell.

So get ready, kids. Twenty years in the making, we’re going to get down in the weeds on this and go way beyond the ‘101’ level, to infinity and beyond. Should be fun; I hope you enjoy my trials, tribulations, angst and tenacity all.

Oh yeah, I’m going to write all three in three months. I’ve got other schtuff to do as well.

So what’s the moral of your story, Mr. Peabody?

Never stop trying, never stop learning, always try to do better than the last time.

Read Full Post »

Older Posts »