Posts Tagged ‘BPM’

So here’s a BPM capabilities matrix from the not-too-distant past. I won’t say which two BPM platforms were evaluated for the matrix below, much less the client or the effort, but I’m sure many have seen this format before; it’s intended as just a starting, jumping-off point.

As I work on this BPM/Case Management series in the coming weeks and months (yes, it will take that long) I hope to blow this out, draft multiple use cases for structured and unstructured process both and, ultimately, draft a white paper for the products examined that goes way deeper under the hood than anything Gorrester or Fartner have ever done.

I’m soliciting any and all input on this artifact and the use cases both, so I can be sure I properly address how each platform’s technology performs any given function, whether that’s good or bad, and why. Feel free to comment here or ping me on Twitter with your input.

I have a LOT of resources and references that I’m going to go through and use for this, plus, of course, running VMs for all the platforms that will be examined (interrogated? <g>). I want to make sure I get it as comprehensive and unbiased as possible.

Cheers, Pat

        Graphical User Interface
        Icons to represent Steps
        Connections between steps
        Drag & Drop
        Reusable Submaps
        Process and Property Inheritance
        Model Management
        Fine grained security
        Public & Private Inbox/Workbasket & Worklist
        Split and Merge steps
        Spawn new workflows
        Interfaces – Web Service, Human, Queue
        Import standard Modeling Languages – BPMN, XPDL
        Visio Interface
        Resource Allocation
        Work shift creation and assignment
        Intelligent metadata repository
        Intelligent Recommendations
        Rules Engine communication
        Dynamic workflow customizations
        Decision making
        Intelligent Routing
        Skill based routing
        Content based routing
        Document Management Functionality
        Load balancing work
        Event Driven
        Intelligent workflow activity traversals
        Integration with external systems
        Ad-hoc reporting
        Scheduled reporting
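As a down payment on blowing the matrix out, the checklist above could be captured as data rather than prose so platforms can be scored side by side. A minimal Python sketch — the platform names and ratings here are hypothetical placeholders, not scores for any real product:

```python
# Capture a slice of the capabilities matrix as data. Ratings are
# hypothetical placeholders (0 = absent, 1 = partial, 2 = full),
# NOT real product scores.
FEATURES = [
    "Reusable Submaps",
    "Rules Engine communication",
    "Skill based routing",
    "Content based routing",
    "Ad-hoc reporting",
]

ratings = {
    "Platform A": {f: 2 for f in FEATURES},  # stand-ins for the two
    "Platform B": {f: 1 for f in FEATURES},  # unnamed platforms
}

def gaps(ratings, baseline="Platform A"):
    """List features where a platform scores below the baseline."""
    return {
        name: [f for f in FEATURES if scores[f] < ratings[baseline][f]]
        for name, scores in ratings.items()
        if name != baseline
    }

print(gaps(ratings))
```

From there, each cell can grow into the “how it does it, good or bad, and why” narrative the use cases are meant to flesh out.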

So what’s the moral of your story Mr. Peabody?

  • This is gonna be bigger than a breadbox.

What’s next?



When examining business and technical trends, particularly amid today’s social media hyperbolic echo chambers’ constant blare, it is instructive to look at history and understand how we got to where we are.

In the case of IBM Case Manager (ICM), I first took note of it and began paying attention in late 2010, reaching out to IBM ECM Education and saying that as soon as there was a class for it I was interested. (Sidebar to the technologists: always try to stay in the vanguard; bleeding edge pays. I’ll explain more later.) So, in the fall of 2011 I trotted off and took IBM Advanced Case Manager (ACM) Installation concurrent with attending FileNet P8 5.0 Installation that particular week. I took the latter class with a then-client present in the room and will hit them up in the #SoC series later.

Back then it was Advanced Case Manager and, depending on the vendor, the ‘A’ stood for something else – ‘Advanced’ (IBM), ‘Adaptive’ (ISIS Papyrus, more on @maxjpucher later; remember the history part of this series throughout) – or even a different letter entirely – Dynamic Case Management (Pega). We’ll come to each of those in turn. On the Big Blue side of things, suffice to say that within the course of the past four or five years everything has come to have an “I” in front of it. Go figure.

Flash forward to current times and everybody pretty much generically just says “Case Management” and leaves it at that. 😉

[Figure: ICM 5 Installation]

Turning back to ICM c.2010/2011, the chronology for installing, configuring and deploying the apps (plural) was thus. For ICM it’s important to remember that in FileNet P8 5.0, Process Engine (one of the three app servers in the base “triad” of P8) had newly been made a Java (not J2EE) app, and that Content Engine and Workplace/XT were JEE apps on middleware, as was/is ICM.

[Figure: ICM 5.2]

And, moving forward in time five years, PE has now been folded into CE to become “CPE,” Content Process Engine, on middleware, though the table structures in the underlying DB(s) are still there. More on that later as we dive down and that aspect’s importance and relevance are examined vis-a-vis the other platforms we’re going to look at. However, we still have three .EARs and one .WAR deployed out on WebSphere. By the way, the discourses on ICM are pretty much going to look at it from a “blue-washed” perspective, so you can pretty much assume WebsFear throughout this series whenever I talk about ICM.

Now, though users and most people who aren’t on the bare metal probably don’t care that much about a bottom-up, systems architecture view on any of this, I give it to you and point it out because, whether they know, care or acknowledge it or not, a platform’s legacy and architecture very, very much determine how it does things the way it does: how the technology affects the business functionality, which things it does well, how and why. All of these platforms and their technology have idiosyncrasies and what I like to call “tips, tricks and traps.” All have been around long enough now to have an empirical body of knowledge and a history of what constitutes best practices for that given platform and its underlying technology.

For this opening salvo I just want to point out that, in the case of ICM, it is an application (dare I say “framework?”) that sits on TOP of IBM Case Foundation (née IBM FileNet P8) and needs ICF in order to do its thing. Other than a couple of kinks I’ll point out, the same is not true of the other platforms we’re going to examine and dive down into. Each has its pros and cons.

So what’s the moral of your story Mr. Peabody?

  • The smoke and mirrors truly are there to distract, keep you from looking behind the curtains.
  • “Those who cannot remember the past are condemned to repeat it.”

What’s next?


Genesis – Case Management #BPM

[Figure: P8 IM and CM suites]

In the beginning, everything was closed, everything was proprietary. Hardware, software. Client, server. There were IWSs, CWSs, FDOS servers and big, huge honking jukeboxes with big, huge honking platters, though they didn’t hold very much by today’s standards. And the peanut gallery, at the behest of the purveyors, said “It was good” and for the most part, at the time, it was. Other than the bloody expensive part.

Then came PCs, and the server side opened up to AIX first, then slowly but surely the other UNIX versions (but not Linux yet) and WorkFlo Business Systems, and the purveyors said “Let there be lots and lots of 3.5″ installation diskettes.” And so we had WorkFlo Script and a runtime interpreter, and Windows for Workgroups 3.11 was the shite. And the FileNuts were one of the big three and the term “workflow” was avant-garde. Images got scanned, distributor queues were filled, documents got routed and the peanut gallery said “It is good.”

In relatively short order we had WorkFlo Controls for Visual Basic and, for a brief period of time, WorkFlo Power Libraries as well. Under the hood all was WorkFlo Application Libraries (C, not C++) and we were procedural beings. Then came ActiveX, Panagon IDM Desktop, and the purveyors and peanut gallery went “huzzah.” Concurrently we had Visual WorkFlo 1.0, then 2.0, and the workflow performer (execution) was abstracted out from the workflow objects, and VW_db and like prefixes were all about us. Elsewhere at the same time, acquisitions were occurring and one in the great Northwest in particular foreboded the future.

Not all was good though, DCOM and JiGlue rained terror from the servers on the programming peasants throughout the NT domain.

Soon thereafter there was the promise of light with praise from the choir and there were Acenza and BrightSpire, but no one knew what they were or why, and their lights went out as quickly as they were lit. Then came “P8” and the peanut gallery said “What?” and the purveyors said “ECM is content, process and connectivity and BPM is automation, integration and continuous process improvement” ten years earlier than anyone else getting a bug up their arse about “continuous process improvement,” “analytics,” “practice” or BPM vs. BPMS. And we had CE, PE and AE and the peanut gallery said “better.”

Then came the great shifts. CE, the heart and soul of our world, went from three NT services to J2EE. And the middleware people became bigger pains in the arse than the database ones because, well, middleware talent was and still is in scarce supply to this day. Thus our exposure and migration from client/server to n-tier. Along the way Workplace (AE) got cleaned up into Workplace XT, and PE and its Visual WorkFlo forebears were slowly left by the wayside. True, there were improvements, but not to the same extent as the rest of the BPM world.

Today there are the big three – IBM Case Foundation, IBM Content Navigator and IBM Case Manager. That last sits on top of the first two, but just for shits and grins here’s a definition of case management from Business Process Framework (ICM’s older, distaff cousin, a tale in itself):
“BPF as a case management solution provides a framework for the development and deployment of business solutions based on the integration of content and workflow management. The central concept behind a case management solution is the Case. CE provides content management services, the PE provides business process services; the Case file provides integration between the two.”

Sound familiar? It was written in 2005.
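For the code-minded, that definition maps onto a very small data model. Here’s a hypothetical Python sketch of the idea — the class and field names are mine for illustration, not actual BPF or ICM APIs:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """What CE (content management services) manages."""
    doc_id: str
    title: str

@dataclass
class WorkItem:
    """What PE (business process services) manages."""
    step: str
    performer: str

@dataclass
class Case:
    """The Case file: the integration point between the two."""
    case_id: str
    documents: list = field(default_factory=list)
    work_items: list = field(default_factory=list)

    def attach(self, doc: Document) -> None:
        self.documents.append(doc)

    def route(self, item: WorkItem) -> None:
        self.work_items.append(item)

claim = Case("CASE-001")
claim.attach(Document("DOC-9", "Signed application"))
claim.route(WorkItem("Underwriting review", "analyst_queue"))
```

Strip away the middleware and that is essentially the shape being described, then and now.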

So what’s the moral of your story Mr. Peabody?

  • Old code never dies, it just gets abstracted.
  • Purveyors like to refry leftovers.
  • The more things change…

What’s next? I’ll tell you what’s next. We’re going down the rabbit hole.



True story: I once did a gig for a very large org where not only was there a large process initiative, but said same org was doing Six Sigma and CMM at the same time. And they had a PMO. Now, the preceding made my eyes blanch just writing it, but the truth of the matter is the project itself went pretty well actually, mainly because of the team on the ground at the time. I’ll talk “dream teams” later; suffice to say they’re rare.

Anywayz, I normally don’t do this, but there’s an interesting conversation going on today over on BPM.com about “Where Do You Think BPM Needs to Be Improved?” Go read it; there are some good thoughts. As is often the case, the initial respondent’s take led others down a similar path, addressing some of the same issues – in this instance, BPM having a bad rep, bad street cred on the business block (and IT too).

Here’s my take, elaborating just a bit, because this is one that hits close to home. Back in the day, before the hyperbole and ADHD of #social and the frenetics of the ‘Net, there were trade mags (rags?) and glossies. I got into an exchange with one publication’s editor back in the late nineties when they took umbrage at my candor about one of the puff pieces they wrote – someone, some project I knew firsthand. Those folks and the piece both had left a bad taste in my mouth because, well, I’d been there. It was one of those instances where I read the article and thought “Wow, that looks like that was a really cool project.” Then I called a cohort (fellow victim?) who was there with me in the trenches and inquired “Did you read the article? That wasn’t the place I was at, were you? That wasn’t the project I was on, were you?” Then we had a good laugh.

Flash forward to current times’ hyperbolics: going to conferences, attending the awards dinner on the last night, seeing companies, projects and people walking up to the front of the room, getting awards with nice chunks of etched crystal on nice blocks of wood, returning to their seats to applause and huzzahs from their tablemates, subsequently touting it on their web sites. I still have a good chuckle when someone leans in over beers later and says “lemme tell you about that one.” Some of it may be grape juice, but having BTDT I know a lot of it is simple honesty, not hype.

We should talk about that more. Owning up to what went well, what didn’t and why, then trying to do something about it on the next go ’round instead of glossing over the failures of the past. They’re called “best practices and lessons learned.” Lots of people, particularly newbies, ask about them and are interested in hearing the truth. I see and hear it constantly. They want, need to know. We should tell ’em. Honestly. Tell ’em about those “Come to Jesus” meetings where the last gasp, the exhale, is made before expiration and then, hopefully, expiation.

What’s the moral of your story Mr. Peabody?

  • You only have control over yourself. You can only do the best you can, exercise the Golden Rule. The rest you leave to karma. It will out in the end, have faith.
  • Don’t believe everything you read on the ‘Net. Lies, damn lies, statistics, blogs and the ‘Net.
  • Look for the good ones, seek them out. They’re there. Wheat and chaff and all that.

What’s next?


“A COE is ‘best practices and lessons learned’ standardized and codified into a framework and repeatable methodology.” – Me

COE Operating Model

Third time in seven days at two conferences that I attended a breakout session on BPM Centers of Excellence. COEs are a subject of interest to me in that I’ve done them four times in the past eight years, and a COE is (should be) about sharing the wealth (knowledge, assets). First time was for a large Southeast financial, as an LOB solutions architect protesting the fact that the COE wasn’t getting me the things I needed. Second time was for a large bank in the Northeast; the infra guys loved me because, as the lead architect, I kept bad apps from getting into production (the #1 goal, in my opinion). Third go ’round I helped spin up the community forum for a large insurer in the Northeast, submitting an operational model for their COE. The latest go ’round is with, once again, a large financial. Supposedly there’s a COE out there somewhere, but it’s pretty ethereal when it comes to finding standards, assets and assistance; we’ll talk ‘visibility’ later.

Anywayz, back to AIG’s “AIG Leverages Their Center of Excellence for Strategic Advantage” session a week ago yesterday at PegaWORLD 2014. They use Forrester’s definition of a COE

  • delivery assurance (code reviews, design reviews)
  • resources and skills
  • technology and architecture (common platform)
  • QA
  • infrastructure

and implemented their COE because they had a lot of everything – apps, support groups, partners, platforms. Though they’ve had Pega for a while, they implemented their COE with the mission of thought leadership: to accelerate projects, mitigate risks and function as vendor liaison. In flight for about 1.5 years, they’re running a centralized model; coming out of the gate they were, and are, working on managing resources, frameworks and integrations, with centralized repositories on the way down the road. Standard benefits realized: efficiencies, standards, a standardized estimating process, re-use, having specialists, driving Agile use and achieving resourcing flexibility. Like a lot of people they’re still working on Agile, and they did acknowledge they’re still Waterfall within the org. Another good thing they’ve done, IMHO, is implement a community forum (more on that later).

Some other items of note:

  • They use Pega cloud for their sandbox, dev environments and provision quickly to new LOB development teams.
  • Enterprise Architecture is present, with an EA in the COE leadership role and on the core COE team; that also is important.
  • Like a lot of people in the Pega world they’ve built a core framework on top of the Pega base.

On the subject of “lessons learned,” and appreciating their candor, they did pony up and acknowledge that, like a lot of shops, funding at the corporate or project level is an issue. About 40% of their funding comes from the business units (resource allocation). Other lessons learned:

  1. Pega expertise to build guardrail-compliant BPM apps
  2. Alignment with Enterprise Architecture
  3. Central repository and structure
  4. Management support (always key)
  5. They did well on documentation but fell down on making it visible in a central repository.

The senior information officer hedged when I asked the pointed question of what they do if one of the dev teams nukes its sandbox, answering with an “I call Pega.” Myself, in the past I’ve just reverted the VM (on premise or otherwise) to a previous snapshot and said “Hope you have your stuff backed up.”
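For reference, the revert itself is a one-liner in most hypervisors. Here’s a hedged Python sketch wrapping VirtualBox’s `VBoxManage snapshot … restore` command (the VM and snapshot names are hypothetical, and VMware, Hyper-V and the cloud providers all have their own equivalents):

```python
import subprocess

def revert_command(vm_name: str, snapshot: str) -> list:
    """Build the VBoxManage invocation that restores a named snapshot."""
    return ["VBoxManage", "snapshot", vm_name, "restore", snapshot]

def revert(vm_name: str, snapshot: str, dry_run: bool = True) -> str:
    cmd = revert_command(vm_name, snapshot)
    if dry_run:  # default to showing the command, not nuking a VM
        return " ".join(cmd)
    subprocess.run(cmd, check=True)  # requires VirtualBox on the host
    return "reverted"

# Hypothetical names -- substitute your own VM and snapshot.
print(revert("pega-sandbox", "clean-baseline"))
```

The “hope you have your stuff backed up” part still applies, of course.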


Later Monday afternoon there was a panel session with Cisco, JPMC, BNY Mellon and Jabil Circuits talking about their Pega BPM COEs. Five to ten minutes per individual, with a short deck each and backgrounders on their COEs; my main grouse about this was that there wasn’t enough time left at the end to ask questions. Interestingly enough, just as with the morning session, there were people from other orgs’ COEs in the audience and I wish a more extensive dialogue could have occurred. If you’re interested, Pega has spun up an on-going forum and as soon as I learn what the true e-mail address for requesting participation is (it’s not PegaWORLD2014COE@pega.com) I’ll pass it on.


Flash forward a week. Last, but not least, Scott Simmons of IBM talking about COEs as well (does Big Blue still call them competency centers?). Scott’s deck demonstrated he’s been around the block a time or two and I was mostly on board with the practical advice he was giving. I did disagree with him on the most important feature of a COE: he stated it was re-usable assets; I maintain it’s governance. Why? Me, myself and I – applications/solutions or systems/enterprise architecture, in the trenches – my number one goal is to keep a bad app from getting into prod.

I’ll look at Scott’s deck later, but interestingly enough he was one of several individuals with some link to Colorado who are here at the conference.


So what’s the moral of your story Mr. Peabody?

  1. It’s about expertise, period, plain and simple.
  2. For me, it’s all about the governance piece. If you don’t have the empowerment, the authorization, the… “teeth,” there’s not much point.

What’s next?




On Kool-Aid and iocane powder

“You fool! You fell victim to one of the classic blunders – The most famous of which is “never get involved in a land war in Asia” – but only slightly less well-known is this: “Never go in against a Sicilian when death is on the line”! Ha ha ha ha ha ha ha! Ha ha ha ha ha ha ha! Ha ha ha…”


I’ll keep it short and sweet. Here’s the thing, if your shit – operating system, database, ECM platform, BPM platform, ACM platform – was the shit, did it the most elegantly, the best way, for all situations, for all functions, for all clients, well then, everybody would be using it, wouldn’t they? Shouldn’t they? Oooor not…

If your shit truly is the shit, how come nobody else gets it? What is the rest of the world missing that you’re not? How come you’re the only one who gets it and nobody else does? And why isn’t the rest of the world listening to you? Things that make you go hmmm…

So what’s the moral of your story Mr. Peabody?

Maybe, just maybe, your shit isn’t the shit.

Sometimes I like lemonade. Sometimes I like root beer.



“One architecture to rule them all. One architecture to find them. One architecture to bring them all and in the machine room bind them.”


This post is not so much a redux of my very first blog post as a continuation of it. In fact, this post will be the first of an on-going series regarding reference architectures, architectural strategies and long-term roadmaps within the context of ECM and BPM both. The series will drop from 20,000′ to 200′ as I descend through the cloud on a large technical architecture recommendation and roadmap for a client. Without belaboring the efforts delineated in that first post and their respective outcomes, this one will take everything I knew up to that point, since that point and beyond as well. I’m going to stretch on this one.

I’m going to take everything I’ve observed, learned and done in the past twenty years in this space and project it (yes, believe it, you heard, read it here first) three years forward. Agreed, that’s a long time in today’s world. The issue for this “technical recommendation” though will not be technology; it will be execution. The roadmap will be three years in duration because that will be the degree of effort required to effect it. And this time I have the juice, the influence and the appropriate ears to see it through.

Beyond the initial table of contents screen shot above, this will actually be one of three artifacts I produce for this client: the second being a Lunch ‘n Learn curriculum I will draft delineating the “how” of everything in the ref arch, and the third covering some of the tools I will be using to blow out portions of the technical recommendation (e.g. the ontology and the taxonomy).

In short, this thing is going to be a freaking masterpiece. It hasn’t been explicitly mandated, but it has been tacitly requested within the scope of my effort and participation with this client. They’re big, huge in fact; and so will this be as well. As for myself it will be the progenitor to multiple case studies and, longer term, one service offering and one commercial, shrink-wrapped product I’ve long dreamt of, want to build and sell.

So get ready kids. Twenty years in the making, we’re going to get down in the weeds on this and go way beyond the ‘101’ level to infinity and beyond. Should be fun, hope you enjoy my trials, tribulations, angst and tenacity all.

Oh yeah, I’m going to write all three in three months. I’ve got other schtuff to do as well.

So what’s the moral of your story Mr. Peabody?

Never stop trying, never stop learning, always try to do better than the last time.

