Are you really managing your emails?

by Frank 5. August 2012 06:00

We all realised long ago that emails account for eighty percent or more of business correspondence, and little has changed today. Hopefully, we also realised that most of us weren’t managing emails and that this left a potentially lethal compliance and legal hole to plug.

I wrote some white papers on the need to manage emails back in 2004 and 2005 (“The need to manage emails” and “Six reasons why organizations don’t manage emails effectively”) and when I review them today they are just as relevant as they were eight years ago. That is to say, despite the plethora of email management tools now available most organizations I deal with still do not manage their emails effectively or completely.

As a recent example, we had an inquiry from the records manager at a US law firm who said she needed an email management solution, but it had to be a ‘manual’ one where each worker would decide if, when and how to capture and save important emails into the records management system. She went on to state emphatically that under no circumstances would she consider any kind of automatic email management solution.

This is the most common request we get. Luckily, we have several ways to capture and manage emails, including a ‘manual’ one as requested as well as a fully automatic one called GEM that analyses all incoming and outgoing emails according to business rules and then automatically captures and classifies them within our electronic records and document management system, RecFind 6.

We have to provide multiple options because that is what the market demands, but it is common sense that any manual system cannot be a complete solution. That is, if you leave it up to the discretion of the operator to decide which emails to capture and how to capture them, then you will inevitably have an incomplete and inconsistent solution. Worse still, you will have no safeguards against fraudulent or dishonest behaviour.

Human beings are, by definition, ‘human’ and not perfect. We are by nature inconsistent in our behaviour on a day-to-day basis. We also forget things and sometimes make mistakes. We are not robots or cyborgs, and consistent, perfect behaviour all controlled by Asimov’s three laws of robotics is a long, long way off for most of us.

This means, dear reader, that we cannot be trusted to always analyse, capture and classify emails in a one-hundred-percent consistent manner. Our excuse is that we are, in fact, just human.

The problem is exacerbated when we have hundreds or even thousands of inconsistent humans (your staff) all being relied upon to behave in an entirely uniform and consistent manner. It is in fact ludicrous to expect entirely uniform and consistent behaviour from your staff and it is bad practice and just plain foolish to roll out an email management system based on this false premise. It will never meet expectations. It will never plug all the compliance and legal holes and you will remain exposed no matter how much money you throw at the problem (e.g., training, training and re-training).

The only complete solution is one based on a fully-automatic model whereby all incoming and outgoing emails are analysed according to a set of business rules tailored to your specific needs. This is the only way to ensure that nothing gets missed. It is the only way to ensure that you are in fact plugging all the compliance and legal holes and removing exposure.
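
To make the rules-based idea concrete, here is a minimal sketch of how automatic capture can work in principle. Everything in it is illustrative: the rule patterns, the classification terms and the save_record call are hypothetical stand-ins, not GEM’s actual rules engine or the RecFind 6 API.

```python
# Minimal sketch of rules-based email capture (illustrative only).
import re
from email.message import EmailMessage
from typing import Optional

# Hypothetical business rules: a pattern to match against the message
# headers, and the classification term to file the email under.
RULES = [
    (re.compile(r"invoice|purchase order", re.I), "Accounts/Payables"),
    (re.compile(r"contract|agreement", re.I), "Legal/Contracts"),
    (re.compile(r"@regulator\.example\.gov", re.I), "Compliance/Regulator"),
]

def classify(msg: EmailMessage) -> Optional[str]:
    """Return the first matching classification term, or None."""
    haystack = " ".join(
        str(msg.get(header, "")) for header in ("Subject", "From", "To")
    )
    for pattern, term in RULES:
        if pattern.search(haystack):
            return term
    return None

def process(msg: EmailMessage) -> None:
    """Capture the email if any rule matches; otherwise let it pass."""
    term = classify(msg)
    if term is not None:
        save_record(msg, term)  # hypothetical call into the records system
```

Because the rules run against every message, nothing depends on an operator remembering to act; that is the whole point of the automatic model.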

The fully automatic option is also the most cost-effective by a huge margin.

The manual approach requires each and every staff member to spend (waste?) valuable time every single day trying to decide which emails to capture and then actually going through the process time and time again. It also requires some form of licence per employee or per desktop. This licence has a cost, and it also has to be maintained, again at a cost.

The automatic approach doesn’t require the employee to do anything. It also doesn’t require a licence per employee or desktop because the software runs in the background talking directly to your email server. It is what we call a low cost, low impact and asynchronous solution.

The automatic model increases productivity and lowers costs. It provides a complete and entirely consistent email management solution at a significantly lower cost than any ‘manual’ model. So, why is it so hard to convince records managers to go with the fully automatic solution? This is the million-dollar question, though in some large organizations it is a multi-million-dollar question.

My response is that you should not be leaving this decision up to the records manager. Emails are the business of all parts of any organization; they don’t just ‘belong’ to the records management department. Emails are an important part of most business processes, particularly those involving clients, suppliers and regulators; that is, the most sensitive parts of your business. The duty to manage emails cuts across all vertical boundaries within any organization. The need is there in accounts and marketing and engineering and in support and in every department.

The decision on how to manage emails should be taken by the CEO or, at the very least, the CIO, with full cognizance of the risks to the enterprise of not managing emails in a one-hundred-percent consistent and complete manner.

In the end, email management isn’t really about email management; it is about risk management. If you don’t understand that, and if you don’t make the necessary decisions at the top of your organization, you are bound to suffer the consequences in the future.

Are you going to wait for the first lawsuit or punitive fine before taking action?

What is really involved in converting to a new system?

by Frank 27. May 2012 06:00

Your customer’s old system is now way past its use-by date and they have purchased a new application system to replace it. Now all you have to do is convert all the data from the old system to the new one. How hard can that be?

The answer is that it can be very, very hard to get right, and it can take months or years if the IT staff or the contractors don’t know what they are doing. In fact, the worst case is that no one can actually figure out how to do the data conversion, so you end up two years later still running the old, unsupported and now about-to-fail system. The really bad news is that this isn’t just the worst-case scenario; it is the most common scenario, and I have seen it happen time and time again.

People who are good at conversions are good because they have done it successfully many times before. So, don’t hire a contractor based on potential and a good sales spiel; hire a contractor based on track record, on experience and on a good many previous references. The time to learn how to do a conversion isn’t on your project.

I will give you guidelines on how to handle a data conversion, but as every conversion is different you are going to have to adapt my guidelines to your project, and you should always expect the unexpected. The good news is that if you have a calm, logical and experienced head then any problem is solvable. We have handled hundreds of conversions from every type of system imaginable to our RecFind product and we have never failed, even though we have run into every kind of speed bump along the way. As they say, “expect the best, plan for the worst, and prepare to be surprised.”

1.    Begin by reviewing the application to be converted by looking at the ‘screens’ with someone who uses the system and understands it. Ask the user what fields/data they want to convert. Take screenshots for your documentation. Remember that a field on the screen may or may not be a field in the database; the value may be calculated or generated automatically. Also remember that even though a screen may be called, say, “File Folder”, not all the fields you can see are necessarily part of the file folder table; they may be ‘linked’ fields from other tables in the database.

2.    You need to document and understand the data model, that is, all the tables and fields and relationships you will need to convert. See if someone has a representation of the data model but never assume it is up to date. In fact, always assume it is not up to date. You need to work with an IT specialist (e.g., the database administrator) and utilize standard database tools like SQL Server Management Studio to validate the data model of the old system; a scripted way to do this is sketched just after this list.

3.    Once you think you understand the data model and data to be converted, you need to document your thoughts in a conversion report and ask the customer to review and approve it. You won’t get it right the first time, so expect this to be an iterative process. Remember that the customer will be in ‘discovery’ mode also.

4.    Once you have acceptance of the data to be converted you need to document the data mapping. That is, show where the data will go in the new application. It would be extremely rare that you would be able to duplicate the data model from the old application; it will usually be a case of adapting the data from the old system to the different data model of the new application. Produce a data mapping report and submit it to the customer for sign-off. Again, don’t expect to get this right the first time; it is also an iterative process because both you and the customer are in discovery mode.

5.    Expect that about 20% or more of the data in the old system will be ‘dirty’; that is, bad, duplicate or redundant data. You need to make a decision about the best time to clean up and de-dupe the data. Sometimes it is in the old application before you convert, but often it is in the new application after you have converted because the new application has more and better functionality for this purpose. Whichever method you choose, you must clean up the data before going live in production.

6.    Expect to run multiple trial conversions. The customer may have approved a specification but reading it and seeing the data exposed in the new application are two very different experiences. A picture is worth a thousand words and no one is smart enough to know exactly how they want their data converted until they actually see what it looks like and works like in the new application. Be smart and bring in more users to view and comment on the new application; more heads are better than one and new users will always find ways to improve the conversion. Don’t be afraid of user opinion, actively encourage and solicit it.

7.    Once the data mapping is approved you need to schedule end-user training (as close as possible to the cutover to the new system) and the final conversion prior to cutover.
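
As a concrete illustration of step 2, the sketch below uses Python and the pyodbc driver to enumerate the tables, columns and foreign-key relationships of a SQL Server database. It is a minimal sketch under stated assumptions: the connection string is a placeholder, and your old system may well sit on a different database entirely.

```python
# Minimal sketch: enumerate an old SQL Server system's actual data model
# so it can be checked against the (often out-of-date) documentation.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=old-system-host;DATABASE=OldSystem;Trusted_Connection=yes;"  # placeholder
)
cursor = conn.cursor()

# Every table and column, in the order they appear in each table.
cursor.execute("""
    SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
    FROM INFORMATION_SCHEMA.COLUMNS
    ORDER BY TABLE_NAME, ORDINAL_POSITION
""")
for table, column, data_type in cursor.fetchall():
    print(f"{table}.{column}: {data_type}")

# Foreign keys expose the relationships the screens may be hiding.
cursor.execute("""
    SELECT fk.name, tp.name AS parent_table, tr.name AS referenced_table
    FROM sys.foreign_keys AS fk
    JOIN sys.tables AS tp ON fk.parent_object_id = tp.object_id
    JOIN sys.tables AS tr ON fk.referenced_object_id = tr.object_id
""")
for fk_name, parent, referenced in cursor.fetchall():
    print(f"{fk_name}: {parent} -> {referenced}")
```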

Of course, for the above process to work you also need the tools required to extract data from the old system and import it into the new system. If you don’t have standard tools you will have to write a one-off conversion program. The time to write this is after the data mapping is approved and before the first trial conversion. To make our life easy we designed and built a standard tool we call Xchange; it can connect to any data source and then map and write data to our RecFind 6 system. However, this is not an easy program to design and write and you are unlikely to be able to afford to do this unless you are in the conversion business like we are. You are therefore most likely going to have to design and write a one-off conversion program.
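
For those writing a one-off program, this is roughly the shape it takes. It is a minimal sketch, assuming the old system can export CSV; the field names in the mapping are illustrative and import_record is a hypothetical stand-in for whatever import interface the new system provides.

```python
# Minimal sketch of a one-off extract/map/import conversion program.
import csv

# The approved data mapping: old system field -> new system field.
FIELD_MAP = {
    "FILE_NO": "FileNumber",
    "FILE_TITLE": "Title",
    "DATE_OPEN": "DateOpened",
}

def convert_row(old_row: dict) -> dict:
    """Apply the documented mapping to one exported record."""
    return {new: old_row.get(old, "").strip() for old, new in FIELD_MAP.items()}

exported = imported = 0
with open("old_system_export.csv", newline="", encoding="utf-8") as f:
    for old_row in csv.DictReader(f):
        exported += 1
        import_record(convert_row(old_row))  # hypothetical import call
        imported += 1

# Keep these counts; they feed the sanity check described below.
print(f"exported={exported} imported={imported}")
```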

One alternative tool you should not ignore is Microsoft’s Excel. If the old system can export data in CSV format and the new system can import data in CSV format then Excel is the ideal tool for cleaning up, re-sequencing and preparing the data for import.
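
If you would rather script the clean-up than do it by hand in Excel, Python’s csv module can do the same trimming, de-duplication and re-sequencing. A minimal sketch, assuming the file names shown:

```python
# Minimal sketch: trim whitespace, drop exact duplicates and re-sequence
# an exported CSV file before importing it into the new system.
import csv

with open("export.csv", newline="", encoding="utf-8") as src, \
     open("cleaned.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(next(reader))  # pass the header row through unchanged
    cleaned = {tuple(cell.strip() for cell in row) for row in reader}
    for row in sorted(cleaned):  # de-duped and re-sequenced
        writer.writerow(row)
```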

And finally, please do not forget to sanity check your conversion. You need to document exactly how many records of each type you exported so you can ensure that exactly the same number of records exist in the new system. I have seen far too many examples of a badly managed conversion resulting in thousands or even millions of records going ‘missing’ during the conversion process. You must have a detailed record count going out and a detailed record count going in. The last thing you want is a phone call from the customer a month or two later saying, “it looks like we are missing some records.”
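
Here is a minimal sketch of that going-out/coming-in check. The record type names are illustrative, and count_records is a hypothetical helper that would run the equivalent of SELECT COUNT(*) per record type against each system.

```python
# Minimal sketch of the record-count sanity check after a conversion.
RECORD_TYPES = ["FileFolder", "Document", "Client"]  # illustrative names

for record_type in RECORD_TYPES:
    out_count = count_records("old", record_type)  # hypothetical helper
    in_count = count_records("new", record_type)
    status = "OK" if out_count == in_count else "MISMATCH - investigate"
    print(f"{record_type}: exported={out_count} imported={in_count} {status}")
```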

Don’t expect the conversion to be easy and do expect it to be an iterative process. Always involve end-users and always sanity check the results. Take extra care and you will be successful.

Have you considered Cloud processing? There are significant benefits

by Frank 6. May 2012 06:00

Most of us have probably become more than a little numbed to the onslaught of Cloud advertising and the promotion of the ‘Cloud’ as the salvation for everyone and the panacea for everything. The Cloud is promoted by its aggrandizers as being both omnipotent and omniscient; both qualities I only previously associated with God.

This is not to say that moving business processing to the Cloud is not a good thing; it certainly is. I just wish that the promoters would tone down the ‘sell’ and clearly explain the benefits and advantages without the super-hype.

Those of us with long memories clearly recall the early hype about what was then called ASP, or Application Service Processing, or even Application Service Provider. This was the early progenitor of the Cloud and, despite massive hype, it did not fly. The reasons were simple: neither the technology nor the software (application and system) was up to the job. Great idea; pity it was about five years before its time.

Unfortunately, super-hype in our industry is usually associated with immature and unproven technology. Wiser, older people nod sagely and then wait a few years for the technology to catch up with the promises.

As an older (definitely) and wiser (hopefully) person, I am now ready to accept that all the technology required for successful and secure Cloud processing is now available and proven, albeit being ‘improved’ all the time, so still take care not to rush in with experimental technology.

As with many new technologies the secret is KISS; Keep It Simple Stupid. If it seems too complex then it is too complex. If the sales person can’t answer all of your questions clearly and unambiguously then walk away.

Most importantly, make sure you know all about all of the parties involved in the transaction. For example:

1.    What is the name of the data centre?

2.    Where is it located?

3.    Who ‘owns’ the rack and equipment and software at the data centre?

4.    What are the redundant features?

5.    What are the backup and recovery options?

6.    Is your vendor the owner of the co-hosted facility or do they subcontract to someone else? If they subcontract, is the company they subcontract to the owner, or are they too just part of a chain of ‘hidden’ middlemen? It is critical for you to understand this chain of responsibility because if something goes wrong you need to know who to chase.

There are a lot more questions you need to ask, but this blog isn’t the place to list them all. I am sure your IT team and application owners will come up with plenty more. If they don’t, wake them up and demand questions.

Most small to medium organizations today simply do not have the time or expertise to run a computer room and manage and maintain a rack of servers. There is also a dearth of ‘real’ expertise and a plethora of phonies out there so hiring someone who is actually smart enough to manage your critical infrastructure is a very difficult exercise made more so by most business owners and managers simply not understanding the requirements or technology. It often becomes a case of the blind hiring the almost blind.

Most small to medium enterprises also cannot afford the redundancy required to ensure a stable and reliable infrastructure. A fifteen-minute UPS is no substitute for a redundant bank of diesel generators and a guaranteed clean power supply.

Why should small to medium enterprises have to buy servers and networks and IT support? It isn’t part of their core business and this stuff should not be weighing down the balance sheet. Why should they be devoting scarce and expensive management time to activities that are not part of their core business?

In-house computer rooms will soon become as rare as dinosaurs, and this is how it should be; they are an anachronism in this day and age, out of time and out of place.

All smart and business savvy small to medium organizations should be planning to progressively move all their processing to the Cloud so as to lower costs, improve service levels and reduce management stress. I say progressively because it is still wise to get wet slowly and to take little steps. Just like with your first two-wheel bicycle, it pays to practice with the training wheels on first. That way, you usually avoid those painful falls.

I like to think I am a little wiser because I still have scars from gravel rash when I was a kid. I am moving my RecFind 6 customers to the Cloud and I am moving my in-house processing to the Cloud but just like you, I am doing it slowly and carefully and triple-checking every aspect. I don’t take risks with my customers or my business and neither should you.

One last thing: I have the advantage of being very IT literate and of having a top IT team working for me, so we have the in-house expertise required to correctly evaluate and select the most appropriate technology and options. If you do not have this level of in-house IT expertise then please take extra care and try to find someone to assist who does have the level of IT knowledge required. Once you sign up, it is too late. Buyer’s remorse is not a solution to any problem.

Are you running old and unsupported software? What about the risks?

by Frank 29. April 2012 20:59

Many years ago we released a 16 bit product called RecFind version 3.2 and we made a really big mistake. We gave it so much functionality (much of it way ahead of its time) and we made it so stable that we still have thousands of users.

It is running under operating systems like XP that it was never developed or certified for, and it is still ‘doing the job’ for hundreds of our customers. Most frustratingly, when we try to get them to upgrade they usually say, “We can’t justify the expense because it is working fine and doing everything we need it to do.”

However, RecFind 3.2 is decommissioned and unsupported, and the databases it uses (Btrieve, Disam and an early version of SQL Server) are also no longer supported by their vendors.

So our customers are capturing and managing critical business records with totally unsupported software. Most importantly, most of them also do not have any kind of support agreement with us (and this really hurts because they say they don’t need a support agreement because the system doesn’t fail), so when the old system catastrophically fails, which it will, they are on their own.

Being a slow learner, ten years ago I replaced RecFind 3.2 and RecFind 4.0 with RecFind 5.0, a brand new 32 bit product. Once again I gave it too much functionality and made it way too stable. We now have hundreds of customers still using old and unsupported versions of RecFind 5.0 and when we try to convince them to upgrade we get that same response, “It is still working fine and doing everything we need it to do.”

If I was smarter I would have built in a date-related software time bomb to stop old systems from working when they were well past their use-by date. However, that would have been a breach of faith, so it is not something we have ever done or will ever do. It is still a good idea, though probably illegal, because it would have protected our customers’ records far better than our old and unsupported systems do now.

In my experience, most senior executives talk about risk management but very few actually practice it. All over the world I have customers with millions of vital business records stored and managed in systems that are likely to fail the next time IT updates desktop or server operating systems or databases. We have warned them multiple times but to no avail. Senior application owners and senior IT people are ignoring the risk and, I suspect, not making senior management aware of the inevitable disaster. They are not managing risk; they are ignoring risk and just hoping it won’t happen in their reign.

Of course, it isn’t just our products that are still running under IT environments they were never designed or certified for; this is a very common problem. The only worse problem I can think of is the ginormous amount of critical business data being ‘managed’ in poorly designed, totally insecure and teetering-on-failure, unsupportable Access and Excel systems; many of them in the back offices of major banks and financial institutions. One of my customers described the 80 or so Access systems that had been developed across his organization as the world’s greatest virus. None had been properly designed, none had any security and most were impossible to maintain once a key employee or contractor had left.

Before you ask, yes we do produce regular updates for current products and yes we do completely redesign and redevelop our core systems like RecFind about every five years to utilize the very latest technology. We also offer all the tools and services necessary for any customer to upgrade to our new releases; we make it as easy and as low cost as possible for our customers to upgrade to the latest release but we still have hundreds of customers and many thousands of users utilizing old, unsupported and about-to-fail software.

There is an old expression that says you can take a horse to water but you can’t make it drink. I am starting to feel like an old, tired and very frustrated farmer with hundreds of thirsty horses on the edge of expiration. What can I do next to solve the problem?

Luckily for my customers, Microsoft Windows Vista was a failure and very few of them actually rolled it out. Also luckily for my customers, SQL Server 2005 was a good and stable product and very few found it necessary to upgrade to SQL Server 2008 (soon to be SQL Server 2012). This means that most of my customers using old and unsupported versions of RecFind are utilizing XP and SQL Server 2005, but this will soon change, and when it does my old products will become unstable or even just stop working. It is just good luck and good design (programmed tightly to the Microsoft API) that some (e.g., 3.2) still work under XP. RecFind 3.2 and 4.0 were never certified under XP.

So we have a mini-Y2K coming, but try as I may I can’t seem to convince my customers of the need to protect their critical and irreplaceable data (are they going to rescan all those documents from 10 years ago?). And, as I alluded to above, I am absolutely positive that we are only one of thousands of computer software companies in this same position.

In fairness to my customers, the Global Financial Crisis of 2008 was a major factor in the disappearance of upgrade budgets. If the call is to either upgrade software or retain staff then I would also vote to retain staff. Money is as tight as it has ever been and I can understand why upgrade projects have been delayed and shelved. However, none of this changes the facts or averts the coming data-loss disaster.

All over the world government agencies and companies are managing critical business data in old and unsupported systems that will inevitably fail with catastrophic consequences. It is time someone started managing this risk; are you?

 
