Archive for the ‘FileNet’ Category

Once upon a time, a long time ago, but not SO very long… time flies.

Jeopardy time. Category – The Old Man and the Content. Answers – Fifteen and Thirty-Eight. Questions – How many times have I been to FileNet UserNet or IBM IOD conferences in the past twenty-three years, and how many classes have I taken from FileNet Education and IBM ECM Education over the years? A hefty investment. In the past five years I’ve made a fairly healthy investment in @pega and @IBM_BPM as well, but nowhere near as significant a one as being a FileNut – the fifteen and the thirty-eight alone are easily a six-figure investment, much less lo these two decades’ tenure.

Over the years I’ve been an at-large board member on the UserNet board and have presented three times. They were good exercises, activities in networking and public speaking, though the latter’s never been much of an issue. SleazerNet ’93 I asked for and got permission from my employer at the time to run as a board member at large, trotted up to the podium in the ballroom in front of a thousand people, extemporized a speech and got the most votes that year for an at-large member. Felt bad for the people I’ve seen go up there over the years and have a meltdown, but I digress. I think there’s still a board, but I don’t know how it works any more, much less how to get on it. Think it has to do with the vendor just asking you and you working for a large enough client.

More recently, IOD’s value to me has been as a “target-rich” environment. Every year I’ve attended IOD, and UserNet before that, somebody I’ve met has turned into an engagement somewhere down the road and therefore more than compensated for my cost of attendance. In the past four years it’s become my annual touch base for the state of ECM in the Big Blue world, as Info360 was for the ECM world at large for the better part of fifteen years. More on that later vis-à-vis #AIIM. Lately I’ve been going with the intent of focusing on whatever I, we, are doing for any given client in-flight at the time – see who else is doing what we’re doing, whether they’re doing it better, whether they’re having the same heartburn.

This year’s focus is a little different, but one thing that’s jumped out at me having just registered at the eleventh hour (also as per usual the past few years) is where the focus is going to be THIS year. Year-before-last it was Big Data, last year it was SMAC. This year’s agenda shows that the big three trends of emphasis will be the segregation and differentiation between content management and case management, plus information governance – thirteen, twenty-eight and twenty-seven sessions respectively across those tracks. The first two are just the latest and greatest terms – IBM Content Foundation and IBM Case Manager – that we apply to the old Content Manager and BPM suites of the P8 days.

In any event, I’m fairly solid on the “I’s” these days, confident that IBM Content Navigator’s TLA (ICN) won’t change, reasonably confident that ICM (remember IBM Classification Module?) and IBM Case Foundation (née Process Engine) won’t change, but for whatever reason “Content Foundation” isn’t sticking in my head so much.

Anywayz, items of interest this year:

  1. Certifications. Need ’em for the new and old entities both re: partner certification. Sitting in the lab all six hours tomorrow, blowing off the rah-rah opening keynotes in the big tent first thing Monday morning and maybe during breaks as well. Need four for sure, shooting for eight.
  2. Datacap. Haven’t been a big capture guy for quite a while, but more and more people keep asking for it as a component of engagements. Most of our capture stuff is Kofax, but in the IBM world…
  3. What’s up with SharePoint integration these days? Where we at? Just because Big Blue doesn’t talk about it that much doesn’t mean it doesn’t have to be addressed at more client sites than not.
  4. The $64K question – Where does IBM Case Foundation fit in the BPM world? Is it just the foundation for IBM Case Manager and no longer a standalone product? Remember, it’s still PE under the hood.
  5. Hands-on Labs for current products of interest, mostly ICM I suppose.
  6. Always looking for a product in the vendor expo or a presenter in a breakout to show me something new that makes me go “Wow, that was cool,” but it doesn’t happen very much.

So what’s the moral of your story, Mr. Peabody?

  1. I dunno, not much. I’ve been around a good while; technology keeps advancing; organizations, people and processes, not so much.
  2. DKDN on going from “Information” to “Insight.” Wish we’d leave names alone; I’m a creature of habit for all my interest in continuing education. It simplifies communicating with and educating the client when we don’t change terms all the time.

What’s next?


Read Full Post »

“One architecture to rule them all. One architecture to find them. One architecture to bring them all and in the machine room bind them.”


This post is not so much a redux of my very first blog post as a continuation of it. In fact, this post will be the first of an on-going series regarding reference architectures, architectural strategies and long-term roadmaps within the context of both ECM and BPM. The series will drop from 20,000′ to 200′ as I descend through the cloud on a large technical architecture recommendation and roadmap for a client. Without belaboring the efforts delineated in that first post and their respective outcomes, this one will take everything I knew up to that point, since that point and beyond as well. I’m going to stretch on this one.

I’m going to take everything I’ve observed, learned and done in the past twenty years in this space and project it (yes, believe it, you heard, read it here first) three years forward. Agreed, that’s a long time in today’s world. The issue for this “technical recommendation” though will not be technology; it will be execution. The roadmap will be three years in duration because that will be the degree of effort required to effect it. And this time I have the juice, the influence and the appropriate ears to see it through.

Beyond the initial table of contents screen shot above, this will actually be one of three artifacts I produce for this client, the others being a Lunch ‘n Learn curriculum I will draft delineating the “how” of everything in the ref arch, and a third piece on some of the tools I will be using to blow out portions of the technical recommendation (e.g. the ontology and the taxonomy).

In short, this thing is going to be a freaking masterpiece. It hasn’t been explicitly mandated, but it has been tacitly requested within the scope of my effort and participation with this client. They’re big, huge in fact; and so will this be as well. As for myself it will be the progenitor to multiple case studies and, longer term, one service offering and one commercial, shrink-wrapped product I’ve long dreamt of, want to build and sell.

So get ready kids. Twenty years in the making, we’re going to get down in the weeds on this and go way beyond the ‘101’ level to infinity and beyond. Should be fun, hope you enjoy my trials, tribulations, angst and tenacity all.

Oh yeah, I’m going to write all three in three months. I’ve got other schtuff to do as well.

So what’s the moral of your story, Mr. Peabody?

Never stop trying, never stop learning, always try to do better than the last time.

Read Full Post »

Making GCD_util sing and dance

Just a quick redux, denouement to the Simultaneously upgrading and migrating Content Engine post from last spring. That project in the case study went live three weeks ago this past Sunday. Ten months in the making, we went through three mock cutovers and a DRE as well before pulling the trigger on that puppy. In fact, after that long of a tenure, we did the DRE and the production cutover two weekends in a row back-to-back. Both had their minor glitches, but the team assembled knew our respective roles and responsibilities well and things – particularly from an end-user perspective – went off quite smoothly when all was said and done.

Anywayz, some further clarifications and enhancements to the GCD_util machinations, as this came up in conversation in a breakout session at IBM IOD a couple of weeks back. Recall that, amongst other things, concurrent with the infra migrations from WebSphere 6.1.0.x to 7.0.0.x on new hardware, we were also changing the bootstrap ID in the CE .EARs across the environments for purposes of isolation. Without further ado, the final steps that we executed for that piece, captured for posterity:

  1. rungcdutil.bat listdcs (output will show DirectoryServerUserName=CN=<old LDAP bootstrap ID>)
  2. rungcdutil.bat export (output will show “Highest GCD version number is: ‘nnn’” and a newgcd.nnn.xml will be generated in the directory where you’ve set up and are running GCD_util)
  3. Edit newgcd.nnn.xml, search for “DirectoryServerUserName” and replace the value for <old LDAP bootstrap ID> with “CN=<new LDAP bootstrap ID>” (the fully qualified CN).
  4. rungcdutil.bat import -f newgcd.nnn.xml
  5. rungcdutil.bat listdcs (output will show DirectoryServerUserName=CN=<new LDAP bootstrap ID>)
  6. rungcdutil.bat resetdcpasswd -p “password” (do include the double quotes if the CE host is *NIX), using the password for the <new LDAP bootstrap ID> account
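Step 3 is the only manual edit in the sequence and it’s the one most likely to get fat-fingered. Here’s a hedged sketch of scripting it in Python – the file name and DNs are placeholders, and the exact XML shape of your export may vary by release, so eyeball the result before running the import:

```python
# Hypothetical helper for step 3: swap the bootstrap bind DN in the
# exported GCD XML before re-importing it. Verify the edited file by
# hand before running rungcdutil import; layouts differ by release.
from pathlib import Path

def swap_bootstrap_dn(xml_path, old_dn, new_dn):
    """Replace every occurrence of the old bind DN with the new one."""
    text = Path(xml_path).read_text(encoding="utf-8")
    if old_dn not in text:
        raise ValueError(f"{old_dn!r} not found in {xml_path}")
    Path(xml_path).write_text(text.replace(old_dn, new_dn), encoding="utf-8")

# Example (placeholder DNs):
# swap_bootstrap_dn("newgcd.123.xml",
#                   "CN=oldBootstrap,OU=svc,DC=example,DC=com",
#                   "CN=newBootstrap,OU=svc,DC=example,DC=com")
```

It’s a blunt string replace on purpose – the DN appears as an attribute value, and a plain replace mirrors exactly what you’d do in a text editor per step 3.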

So what’s the moral of our story, Mr. Peabody?

Prior proper planning makes all the difference in the world, as does timing an effort out in accordance with reality, not a project plan.

Any questions, feel free to ask.


Read Full Post »

There’s this guy – Procedure to change the username and/or password for the FileNet Content Engine Directory Service Account, including the bootstrap user. Not too bad. And then there’s this guy – Simultaneously Upgrading and Migrating Content Engine. The latter was egregious and onerous both, given the client’s size, the number of object stores involved and the client bureaucracy within server support groups (OS, middleware, database, security). The time that would be involved was unacceptable. Add to that a need to deal with both Active Directory and ADAM and it got downright ugly. Now, ask yourself, what if you don’t want to endure the mechanics of the second Tech Note? What if you want to use your old GCD so your new environment can see all of your object stores as soon as you bring it up? Let’s jump straight to the end, three and a half months and five PMRs later, and show something that, to the best of my knowledge given the multiple interactions with IBM Support in the past quarter, hasn’t been done before.

The goal

Migrate an existing P8 4.5.1.x implementation – CE, CSE, PE, AE, WPXT, RM – from WebSphere Application Server ND 6.1.0.x to new infra on WAS 7.0.0.x on RHEL 5.x 64-bit. DB is Oracle 10g, Microsoft DCs on AD and ADAM both. The old bind ID is on AD; the new one needs to be on ADAM even though the user base is on AD (we’ll get to that later).

Tips, tricks and traps

  1. Theoretically, assuming re-creation of all your JDBC data sources on your new WAS environment, you should be able to take your old CE .EAR off the old environment, use BootStrapConfig.JAR for the new bootstrap ID and deploy on the new WAS instance. Vendor support acknowledges the theory, but in truth, although many have tried, all have failed. Fuggedabout it. Save yourself a lot of time and heartburn: do a new CE install on the new WAS 7.x instance and, when Configuration Manager time comes, do an upgrade profile. (I’ll explain later.)
  2. Speaking of Configuration Manager, this is only the second time I’ve run into this, but the UI for Config Mgr does’na like 64-bit Linux. Libraries you’ll need to make it like 64-bit Linux:
    1. xorg-x11-xauth.i386
    2. xorg-x11-xauth.x86_64
    3. libX11-devel.i386
    4. libXpm.i386
    5. libXpm.x86_64
    6. libXtst.i386
    7. libXtst.x86_64

You can run Configuration Manager from the command line for all of Configure Bootstrap, Configure JDBC OS, Configure LDAP, Configure Login Modules, etc., but it’s a pain. Quicker and easier to get the UI up and running with the above libs.

  3. Do not, repeat do not, use TCL, Python, Perl or any other scripts to replicate your data sources from the old WAS environment to the new. Let Config Mgr do all the Configure JDBC OS’es for you to make sure the data sources have all the right params – data helpers, custom properties, etc.
  4. Beware WAS certificates. They drive Configuration Manager nutz. Worst case – and easily enough done – use the TCL script for the Application Deploy from the command line to get your new CE .EAR across the line and into WAS. Don’t ask, just trust me.
  5. Once you have CE up on a new domain with a new GCD and a new SitePrefs object store, as with step #1 above, do new base installs of AE, WPXT and RM. Again, theoretically, you should be able to – for example – take your old AE .EAR, change values in your WcmApiConfig.properties and deploy it on your new WAS instance, but this also does not work. Remember, the end-state is that you do want to get back to, and use, your old GCD and SitePrefs object stores so you don’t have to re-create the world when all is done. Save some more time, just do new installs. In this case, base AE 4.0.2, patch to, CE and PE client updaters. Same for WPXT – 1.1.4, client updaters. Also, before I forget, it goes without saying that before you start this exercise you should back up the tablespaces for your GCD and SitePrefs object store and save these pigs for posterity forever. Or at least until your new environment is fully operational and validated.
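For item 2’s library list, here’s a sketch of pulling everything in one yum pass on RHEL – package names are as listed above; run the yum line as root, and availability of the 32-bit packages in your repos is an assumption:

```shell
# Assemble the i386/x86_64 X11 libraries Configuration Manager's UI
# needs on 64-bit RHEL (package list from item 2 above).
PKGS="xorg-x11-xauth.i386 xorg-x11-xauth.x86_64 libX11-devel.i386 \
libXpm.i386 libXpm.x86_64 libXtst.i386 libXtst.x86_64"
echo yum install -y $PKGS   # drop the 'echo' to actually install (as root)
```

Installing both architectures of each library matters – the 32-bit pieces are what trip people up on a 64-bit box.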

So now you have a new environment on new infra, how do you get your old GCD and SitePrefs object store back with all your old environment info? On to the good stuff.

  1. For the GCD, use Config Mgr and point your JDBC data sources back to the tablespace for the old GCD. CE (and the whole domain for that matter) is down when you do this, of course. Now, use GCD_util to re-encrypt the password for your bootstrap ID back into the GCD before bringing CE up. (VERY IMPORTANT)
  2. Again via Config Mgr point your data sources for the SitePrefs object store back to the original tablespace for it.
  3. Re-start; validate your FileNet/Engine URL and your Health URL; make sure you can get into FEM. Speaking of FEM, forget about trying to fix your Verity collections for CSE – delete and re-create them. Standard stuff for isolated regions, connection points, etc. in the Workplace and Workplace XT bootstraps, <blah><blah>
  4. Last but not least, but needed before you can even get your domain up, the issue of AD and ADAM. Short answer, Federated Realms in WebSphere, bind ID looking at ADAM, realm addition looking back at AD so the user base can still get in based upon your bootstrap ID.

So what’s the moral of our story, Mr. Peabody?

Lessons learned

  1. If you’ve been around a while and know how a lot of this works under the hood, don’t try to out-think the system or go too far in circumventing how things work to expedite things. The only major departure from normal – but a very key point – is the stuff in steps 1 and 4 above; the latter was unique to the site in this case study.
  2. Support. If the client’s paying that 18% for vendor support or, better yet, has AVP, use those guys – call ‘em up. They’ve been around a while and have seen a lot of this before. Oftentimes a fresh pair of eyes will illuminate the obvious because you’ve been down in the weeds too long. In one instance, getting L3 WebSphere engineers on the line for one particularly gnarly problem yielded a WAS cache location I never knew about – I knew about wstemp and the temp location under the node for the apps themselves – which explained why stuff still didn’t work as I knew it should after stopping, undeploying and redeploying AE and WPXT. In fact, for this PMR, there will be a new Tech Note coming out. The logical path is at the same level as wstemp.
  3. If your database folks are handy and accessible, go the route of the second Tech Note at the beginning of this post and be done with it.

Thank you and enjoy your meal. Good night Maureen.

Read Full Post »