Archive for the ‘IBM’ Category

Once upon a time, a long time ago, but not SO very long… time flies.

Jeopardy time. Category – The Old Man and the Content. Answers – Fifteen and Thirty-Eight. Questions – How many times have I been to FileNet UserNet or IBM IOD conferences in the past twenty-three years, and how many classes have I taken from FileNet Education and IBM ECM Education over the years? A hefty investment. In the past five years I’ve made a fairly healthy investment in @pega and @IBM_BPM as well, but nowhere near as significant a one as being a FileNut – the fifteen and the thirty-eight alone are easily a six-figure investment, to say nothing of lo these two decades’ tenure.

Over the years I’ve been an at-large member of the UserNet board and have presented three times. They were good exercises, activities in networking and public speaking, though the latter’s never been much of an issue. At SleazerNet ’93 I asked for and got permission from my employer at the time to run as a board member at large, trotted up to the podium in the ballroom in front of a thousand people, extemporized a speech, and got the most votes that year for an at-large member. I’ve felt bad for the people I’ve seen go up there over the years and have a meltdown, but I digress. I think there’s still a board, but I don’t know how it works any more, much less how to get on it. I think it has to do with the vendor asking you, and with working for a large enough client.

More recently, IOD’s value to me has been as a “target-rich” environment. Every year I’ve attended IOD, and UserNet before that, somebody I’ve met has turned into an engagement somewhere down the road and has therefore more than compensated for my cost of attendance. In the past four years it’s become my annual touch base for the state of ECM in the Big Blue world, as Info360 was for the ECM world at large for the better part of fifteen years. More on that later vis-a-vis #AIIM. Lately I’ve been going with the intent of focusing on whatever I (or we) am doing for any given client in-flight at the time, seeing who else is doing what we’re doing, whether they’re doing it better, and whether they’re having the same heartburn.

This year’s focus is a little different, but one thing that’s jumped out at me having just registered at the eleventh hour (also as per usual the past few years) is where the focus is going to be THIS year. Year-before-last it was Big Data, last year it was SMAC. This year’s agenda shows that the big three tracks of emphasis will be content management, case management (and the segregation, differentiation between the two), and information governance – thirteen, twenty-eight and twenty-seven sessions respectively across those tracks. The first two are just the latest and greatest names – IBM Content Foundation and IBM Case Manager – that we apply to the old Content Manager and BPM suites from the P8 days.

In any event, I’m fairly solid on the “I’s” these days, confident that IBM Content Navigator’s TLA (ICN) won’t change, reasonably confident that ICM (remember IBM Classification Module?) and IBM Case Foundation (née Process Engine) won’t change, but for whatever reason “Content Foundation” isn’t sticking in my head so much.

Anywayz, items of interest this year:

  1. Certifications. Need ’em for the new and old entities both re: partner certification. Sitting in the lab all six hours tomorrow, blowing off the rah-rah opening keynotes in the big tent first thing Monday morning and maybe during breaks as well. Need four for sure, shooting for eight.
  2. Datacap. Haven’t been a big capture guy for quite a while, but more and more people keep asking for it as a component of engagements. Most of our capture stuff is Kofax, but in the IBM world…
  3. What’s up with SharePoint integration these days? Where we at? Just because Big Blue doesn’t talk about it that much doesn’t mean it doesn’t have to be addressed at more client sites than not.
  4. The $64K question – Where does IBM Case Foundation fit in the BPM world? Is it just the foundation for IBM Case Manager and no longer a standalone product? Remember, it’s still PE under the hood.
  5. Hands-on Labs for current products of interest, mostly ICM I suppose.
  6. Always looking for a product in the vendor expo or a presenter in a breakout to show me something new that makes me go “Wow, that was cool,” but it doesn’t happen very much.

So what’s the moral of your story Mr. Peabody?

  1. I dunno, not much. I’ve been around a good while; technology keeps advancing, organizations, people and processes not so much.
  2. DKDN on going from “Information” to “Insight.” Wish we’d leave names alone; I’m a creature of habit for all my interest in continuing education. It simplifies communicating with and educating the client when we don’t change terms all the time.

What’s next?


Read Full Post »

“A COE is best practices and lessons learned standardized and codified into a framework and repeatable methodology.” – Me

COE Operating Model

Third time in seven days at two conferences that I attended a breakout session on BPM Centers of Excellence. COEs are a subject of interest to me in that I’ve done it four times in the past eight years, and it is (or should be) about sharing the wealth (knowledge, assets). First time was for a large Southeast financial, as an LOB solutions architect protesting the fact that the COE wasn’t getting me the things I needed. Second time was for a large bank in the Northeast; the infra guys loved me because, as the lead architect, I kept bad apps from getting into production (the #1 goal, in my opinion). Third go ’round I helped spin up the community forum for a large insurer in the Northeast, submitting an operational model for their COE. Latest go ’round is with, once again, a large financial. Supposedly there’s a COE out there somewhere, but it’s pretty ethereal when it comes to finding standards, assets and assistance; we’ll talk ‘visibility’ later.

Anywayz, back to AIG’s “AIG Leverages Their Center of Excellence for Strategic Advantage” session, a week ago yesterday at PegaWORLD 2014. They use Forrester’s definition of a COE:

  • delivery assurance (code reviews, design reviews)
  • resources and skills
  • technology and architecture (common platform)
  • QA
  • infrastructure

and implemented their COE because they had a lot of everything – apps, support groups, partners, platforms. Though they’ve had Pega for a while, they implemented their COE with a mission of thought leadership: to accelerate projects, mitigate risks and function as vendor liaison. In flight for about a year and a half, they’re running a centralized model; coming out of the gate they were (and are) working on managing resources, frameworks and integrations, with centralized repositories on the way down the road. Standard benefits realized: efficiencies, standards, a standardized estimating process, re-use, having specialists, driving agile use and achieving resourcing flexibility. Like a lot of people, they’re still working on Agile and did acknowledge they’re still Waterfall within the org. Another good thing they’ve done, IMHO, is implement a community forum (more on that later).

Some other items of note:

  • They use Pega cloud for their sandbox, dev environments and provision quickly to new LOB development teams.
  • Enterprise Architecture is present, with an EA in a COE leadership role and on the core COE team; that also is important.
  • Like a lot of people in the Pega world they’ve built a core framework on top of the Pega base.

On the subject of “lessons learned” – and I appreciated their candor – they did pony up and acknowledge that, like a lot of organizations, funding at the corporate or project level is an issue. About 40% of their funding comes from the business units (resource allocation). Other lessons learned:

  1. Pega expertise to build guardrail-compliant BPM apps
  2. Alignment with Enterprise Architecture
  3. Central repository and structure
  4. Management support (always key)
  5. They did well on documentation but fell down on putting it out there, visible, on a central repository.

The senior information officer hedged when I asked the pointed question of what they do if one of the dev teams nukes its sandbox, answering with “I call Pega.” Me, in the past I’ve just reverted the VM (on premises or otherwise) to a previous snapshot and said, “Hope you have your stuff backed up.”


Later Monday afternoon there was a panel session with Cisco, JPMC, BNY Mellon and Jabil Circuits talking about their Pega BPM COEs – five to ten minutes per individual, with a short deck each and backgrounders on their COEs. My main grouse about this was that there wasn’t enough time left at the end to ask questions. Interestingly enough, just as with the morning session, there were people from other orgs’ COEs in the audience, and I wish a more extensive dialogue could have occurred. If you’re interested, Pega has spun up an on-going forum; as soon as I learn the true e-mail address for requesting participation (it’s not PegaWORLD2014COE@pega.com), I’ll pass it on.


Flash forward a week. Last, but not least, Scott Simmons of IBM talking about COEs as well (does Big Blue still call them competency centers?). Scott’s deck demonstrated he’s been around the block a time or two, and I was mostly on board with the practical advice he was giving. I did disagree with him on the most important of his three features of a COE: he stated it was re-usable assets; I maintain it’s governance. Why? Me, myself and I, whether doing applications/solutions or systems/enterprise architecture in the trenches, my number one goal is to keep a bad app from getting into prod.

I’ll look at Scott’s deck later, but interestingly enough he was one of several individuals with some link to Colorado who are here at the conference.


So what’s the moral of your story Mr. Peabody?

  1. It’s about expertise, period, plain and simple.
  2. For me, it’s all about the governance piece. If you don’t have the empowerment, the authorization, the… “teeth,” there’s not much point.

What’s next?



Read Full Post »

“One architecture to rule them all. One architecture to find them. One architecture to bring them all and in the machine room bind them.”


This post is not so much a redux of my very first blog post as a continuation of it. In fact, this post will be the first of an on-going series regarding reference architectures, architectural strategies and long-term roadmaps within the context of both ECM and BPM. The series will drop from 20,000′ to 200′ as I descend through the cloud on a large technical architecture recommendation and roadmap for a client. Without belaboring the efforts delineated in that first post and their respective outcomes, this one will take in everything I knew up to that point, since that point and beyond as well. I’m going to stretch on this one.

I’m going to take everything I’ve observed, learned and done in the past twenty years in this space and project it (yes, believe it, you heard, read it here first) three years forward. Agreed, that’s a long time in today’s world. The issue for this “technical recommendation,” though, will not be technology; it will be execution. The roadmap will be three years in duration because that is the degree of effort required to effect it. And this time I have the juice, the influence and the appropriate ears to see it through.

Beyond the initial table of contents screen shot above, this will actually be one of three artifacts I produce for this client, the other two being a Lunch ’n Learn curriculum I will draft delineating the “how” of everything in the ref arch, and a piece on some of the tools I will be using to blow out portions of the technical recommendation (e.g., the ontology and the taxonomy).

In short, this thing is going to be a freaking masterpiece. It hasn’t been explicitly mandated, but it has been tacitly requested within the scope of my effort and participation with this client. They’re big, huge in fact; and so will this be as well. As for myself it will be the progenitor to multiple case studies and, longer term, one service offering and one commercial, shrink-wrapped product I’ve long dreamt of, want to build and sell.

So get ready kids. Twenty years in the making, we’re going to get down in the weeds on this and go way beyond the ‘101’ level to infinity and beyond. Should be fun, hope you enjoy my trials, tribulations, angst and tenacity all.

Oh yeah, I’m going to write all three in three months. I’ve got other schtuff to do as well.

So what’s the moral of your story Mr. Peabody?

Never stop trying, never stop learning, always try to do better than the last time.

Read Full Post »

Making GCD_util sing and dance

Just a quick redux, denouement to the Simultaneously upgrading and migrating Content Engine post from last spring. The project in that case study went live three weeks ago this past Sunday. Ten months in the making, we went through three mock cutovers and a DRE as well before pulling the trigger on that puppy. In fact, after that long a tenure, we did the DRE and the production cutover two weekends in a row, back-to-back. Both had their minor glitches, but the team assembled knew our respective roles and responsibilities well, and things – particularly from an end-user perspective – went off quite smoothly when all was said and done.

Anywayz, some further clarifications and enhancements to the GCD_util machinations, as this came up in conversation in a breakout session at IBM IOD a couple of weeks back. Recall that, amongst other things, concurrent with the infra migrations from WebSphere 6.1.0.x to 7.0.0.x on new hardware, we were also changing the bootstrap ID in the CE .EARs across the environments for purposes of isolation. Without further ado, the final steps we executed for that piece, captured for posterity:

  1. rungcdutil.bat listdcs (you will see DirectoryServerUserName=CN=<old LDAP bootstrap ID> in the output)
  2. rungcdutil.bat export (you will see “Highest GCD version number is: ‘nnn’” and a newgcd.nnn.xml will be generated in the directory where you’ve set up and are running GCD_util)
  3. Edit newgcd.nnn.xml, search for “DirectoryServerUserName” and replace the value CN=<old LDAP bootstrap ID> with CN=<new LDAP bootstrap ID> (the fully qualified CN)
  4. rungcdutil.bat import -f newgcd.nnn.xml
  5. rungcdutil.bat listdcs (you will see DirectoryServerUserName=CN=<new LDAP bootstrap ID>)
  6. rungcdutil.bat resetdcpasswd -p “password” – DO INCLUDE THE DOUBLE QUOTES IF the CE host is *NIX.
    Use the password for the account <new LDAP bootstrap ID>.
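The hand-edit in step 3 is the part somebody inevitably fat-fingers. As a sketch only – assuming a *NIX CE host with a rungcdutil.sh wrapper (the shell name, the DNs and the XML attribute layout below are all my assumptions, not FileNet-documented specifics; check your own export) – the swap can be scripted:

```shell
#!/bin/sh
# Sketch of the bootstrap-ID swap (steps 2-4 above); not official FileNet tooling.
# The rungcdutil.sh wrapper name, DNs and GCD XML layout are assumptions.
OLD_ID='CN=oldBootstrap,OU=svc,DC=example,DC=com'
NEW_ID='CN=newBootstrap,OU=svc,DC=example,DC=com'

# rungcdutil.sh export            # would emit newgcd.nnn.xml on a real CE host
# Stand-in for the exported GCD, so this runs as a dry run:
printf '%s\n' "<GCD DirectoryServerUserName=\"$OLD_ID\"/>" > newgcd.nnn.xml

# Step 3: swap the bootstrap ID in the exported GCD XML.
sed "s|$OLD_ID|$NEW_ID|g" newgcd.nnn.xml > newgcd.edited.xml

# rungcdutil.sh import -f newgcd.edited.xml
# rungcdutil.sh resetdcpasswd -p "password"   # double quotes required on *NIX
grep -q "$NEW_ID" newgcd.edited.xml && echo "bootstrap ID swapped"
```

Same idea works from PowerShell against rungcdutil.bat on a Windows CE host; the point is to make the substitution mechanical and diffable rather than a freehand edit.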

So what’s the moral of our story Mr. Peabody?

Prior proper planning makes all the difference in the world, as does timing an effort out in accordance with reality, not a project plan.

Any questions, feel free to ask.


Read Full Post »