What is the future of RecFind? - The Product Road Map

by Frank 19. May 2014 06:00

First a little history. We began in 1984 with our first document management application called DocFind, marketed by the then Burroughs Corporation (now Unisys). In June 1986 we sold the first version of RecFind, a fully-featured electronic records management system and a vast improvement on the DocFind product. We then progressively added document imaging, electronic document management and workflow, and then with RecFind 6 delivered a brand new paradigm: an amalgam of all previous functionality, an information management system able to run multiple applications concurrently with a complete set of enterprise content management functionality. RecFind 6 is the eighth completely new iteration of the iconic RecFind brand.

RecFind 6 was and is unique in our industry because it was designed as what was previously called a Rapid Application Development (RAD) system, but unlike previous examples we provided the high-level toolset, so new applications could be inexpensively ‘configured’ (using the DRM) rather than expensively programmed, and new application tables and fields easily populated using Xchange. It immediately provided every customer with the ability to change almost anything they needed changed without needing to deal with the vendor (us). Each customer had the same tools we used to configure multiple applications within a single copy of RecFind 6. RecFind 6 was the first ECM product to truly empower the customer and to release them from the expensive and time-consuming process of having to negotiate with the vendor to “make changes and get things done.”

In essence, the future of the RecFind brand can be summarised as more of the same, but as an even easier to use and more powerful product. Architecturally, we are moving away from the fat-client model (in our case based on the .NET smart-client paradigm) to the zero-footprint, thin-client model to reduce installation and maintenance costs and to support far more operating system platforms than just Microsoft Windows. The new version 2.6 web-client, for instance, happily runs on my iPad within the Safari browser and provides me with all the information I need on my customers when I travel or work from home (we use RecFind 6 as our Customer Relationship Management system, or CRM). I no longer need a PC at home, nor do I need to carry a heavy laptop through airports.

One of my goals for the remainder of 2014 and into 2015 is to convince my customer base to move to the RecFind 6 web-client from the standard .NET smart-client. This is because the web-client provides tangible, measurable cost benefits and will be the basis for a host of new features as we gradually deprecate the .NET smart-client and expand the functionality of the web-client. We do not believe there is a future for the fat/smart-client paradigm; it has seen its day. Customers are rightfully demanding a zero footprint and support for an extensive range of operating environments and devices, including mobile devices such as smartphones and tablets. Our web-client provides the functionality, mobile device support and convenience they are demanding.

Of course the back-end of the product, the image and data repository, also comes in for major upgrades and improvements. We are sticking with MS SQL Server as our database but will incorporate a host of new features and improvements to better facilitate the handling of ‘big data’. We will continue to research and make improvements to the way we capture, store and retrieve data, and because our customers’ databases are now so large (measured in hundreds of gigabytes), we are making it easier and faster to both back up and audit the repository. The objectives, as always, are scalability, speed, security and robustness.

We are also adding new functionality to allow the customer to bypass our standard user interface (e.g., the .NET smart-client or web-client) and create their own user interface or presentation layer. The objective is to make it as easy as possible for the customer to create tailored interfaces for each operating unit within their organization. A simple way to think of this functionality is to imagine a single high-level tool that lets you quickly and easily create your own screens and dashboards and program against our SDK.

On the add-on product front we will continue to invest in products such as the Button, the MINI API, the SDK, GEM, RecCapture, the High Speed Scanning Module and the SharePoint Integration Module. Even though the base product RecFind 6 has a full complement of enterprise content management functionality, these add-on products provide options requested by our customers. They are generally a way to do things faster and more automatically.

We will continue to provide two approaches for document management; the end-user paradigm (RecFind 6 plus the Button) and the fully automatic capture and classification paradigm (RecFind 6 plus GEM and RecCapture). As has been the case, we also fully expect a lot of our customers to combine both paradigms in a hybrid solution.

The major architectural change is away from the .NET smart-client (fat-client) paradigm to the browser-based thin-client or web-client paradigm. We see this as the future for all application software, unconstrained by the strictures of proprietary operating systems like Microsoft Windows.

As always, our approach, our credo, is that we do all the hard work so you don’t have to. We provide the feature rich, scalable and robust image and data repository and we also provide all of the high level tools so you can configure your applications that access our repository. We also continue to invest in supporting and enhancing all of our products making sure that they have the feature set you require and run in the operating environments you require them to. We invest in the ongoing development of our products to protect your investment in our products. This is our responsibility and our contribution to our ongoing partnership.

 

Is this Microsoft’s worst mistake ever?

by Frank 30. November 2013 06:00

I run a software company called the Knowledgeone Corporation that has been developing application solutions for the Microsoft Windows platform since the very first release of Windows. As always, our latest product offering, RecFind 6 version 2.6, has to be tested and certified against the latest release of Windows. In this case that means Windows 8.1.

Like most organizations, we waited for the Windows 8.1 release before upgrading our workstations from Windows 7. The only exceptions were our developers’ workstations, because we bought them new PCs with Windows 8 pre-installed.

We are now testing the final builds of RecFind 6 version 2.6 and have found a major problem. The problem is that Microsoft in its infinite wisdom has decided that you can’t install Windows 8.1 over a Windows 7 system and retain your already installed applications.

The only solution is to install Windows 8 first and then upgrade Windows 8 to Windows 8.1. However, if you are running Windows 7 Enterprise this won’t work either and you will be told that you will have to reinstall all of your applications.

I am struggling to understand Microsoft’s logic.

Surely Microsoft wants all its customers to upgrade to Windows 8.1? If so, why has it ‘engineered’ the Windows 8.1 upgrade so customers will be discouraged from using it? Does anyone at Microsoft understand how much work and pain is involved in re-installing all your applications?

No, I am not kidding. If you have a PC or many PCs with Windows 7 installed, you are going to have to install Windows 8 first in order to maintain all of your currently installed applications. Then, after spending many hours installing Windows 8 (it is not a trivial process), you must spend more precious time installing Windows 8.1. Microsoft has ensured that you cannot go direct from Windows 7 to Windows 8.1.

Of course, if you are unlucky, you could be living in a country where Microsoft has blocked the downloading of Windows 8, like Australia. Now you are between a rock and a hard place. Microsoft won’t let you install Windows 8 and if you install Windows 8.1 you face days or weeks of frustrating effort trying to re-install all of your existing applications.

 

Here are some quotes from Microsoft:

“You can decide what you want to keep on your PC. You won't be able to keep programs and settings when you upgrade. Be sure to locate your original program installation discs or purchase confirmation emails if you bought programs online. You'll need these to reinstall your programs after you upgrade to Windows 8.1—this includes, for example, Microsoft Office, Apache OpenOffice, and Adobe programs. It's also a good idea to back up your files at this time, too.”

“If you're running Windows 7, Windows Vista, or Windows XP, all of your apps will need to be reinstalled using the original installation discs, or purchase confirmation emails if you bought the apps online.”

If the management at Microsoft wanted to ensure the failure of Windows 8.1 they couldn’t have come up with a better plan than the one they have implemented. By making Windows 8.1 so difficult to install they have ensured that its customers will stick with the tried and proven Windows 7 for as long as possible.

Can anyone at Microsoft explain why they thought this was a good idea?

Do you really need a Taxonomy/Classification Scheme with a Records Management System?

by Frank 26. October 2013 06:00

Background

Classification schemes are a way to group or order data; the objective being to group ‘like’ objects together. Classification schemes have been in use for tens of thousands of years, probably beginning when man first realized that there were different types of animals and plants.

We use classification schemes both to make things easier to find and to add value to a group of objects. By adding value I mean that a classification (describing a group) may provide more information about the members of that group than is obvious from an analysis of a single member; this could be referred to as semantics.

Classification schemes are used in all walks of life, for example: in business, in science, in academia and in politics. Are you a liberal or a conservative? Is it a mammal? If it is, is it a marsupial, a monotreme or a placental mammal? This last example illustrates the usual hierarchical arrangement of classification schemes.

In business, we have long used classification schemes to order business documents, that is, records of business transactions. We are all familiar with file folders and filing cabinets; these things are tools of a classification scheme. They make implementing a classification scheme easier as do numbering systems, colors, barcodes and Lektrievers.

With the first commercial availability of mainframe computers in the early 1960s came our first attempts to computerize filing systems. It was also in the 1960s that we saw the first text indexing systems and the first sophisticated search algorithms.

The advent of text indexing and search algorithms allowed us to do a much better job of classifying data but more importantly, they allowed us to do a much better job of finding data.

Let’s not get into a debate about terminology and acronyms

Our industry (information management, to use an all-encompassing term) is often its own worst enemy. It creates terms and acronyms at will, with both confusing and overlapping definitions. Then it wonders why normal end-users exhibit first bewilderment and then disinterest. Let’s look at a few examples, e.g., RIMS, RMS, DMS, EDRMS, IAMS, CMS, ECM and KMS.

Do you realize that the process of records management is part of each of the preceding acronyms?

For my part I will stick with my old friend the world records management standard, ISO 15489. It tells us that records are evidence of a business transaction and that records are in any form including paper, electronic documents and emails (I know emails are electronic documents but the world generally differentiates them because emails are ‘different’).

So as far as I am concerned the term Records Management System or RMS includes everything we do and is easily recognized and understood so this is the term and acronym I will use in this paper.

Browsing versus searching

Classification systems are very good at making it easier for us to find information by browsing but not very helpful when we are searching.

Most classification systems require you to first ‘browse’ before finding the exact information you want; you usually have to examine multiple objects before you find the one you want. But this is what classification systems are very good at: because they organize data in a logical (to a human being) way, we usually know where to begin looking. This is why a classification scheme works so well with a manual filing system (multiple cabinets or multiple shelves of file folders).

Classification schemes are great for physical data and, I would say, absolutely necessary for physical data; how else would you organize fifty-thousand file folders (containing seven and a half million pages) in a huge filing room with hundreds of shelves?

However, with computers I don’t need to browse through multiple objects to find the one I want. By using techniques more appropriate to the computer than the filing room, I can search for and find exactly what I want almost instantly. I do not need to leaf through the file folder, I can go directly to the page or directly to the word. I can use the power of the computer.

The following statement will probably be seen as heresy by most practicing records managers, but we actually don’t need a classification system (Taxonomy) when computerizing records. We just need a way to index and then search for information.

We need to organize our data so an ordinary end-user can easily find what they need without having to be a trained, professional records manager.

Indexing versus classifying

Now I know my interpretation of these two terms will not thrill everyone but the differentiation is an important part of my hypothesis.

Let’s start by looking at two kinds of books: a reference book and a work of fiction. Both have a table of contents (a classification system usually called a TOC) but only one of them, the reference book, usually has an index.

The TOC of the reference book is both useful and often used. The TOC of the work of fiction is neither useful nor much used (readers rarely need more than a bookmark).

The TOC of the reference book is a way to organize information into a logical form, grouping ‘like’ information together in chapters and sections. The TOC of a work of fiction is just a list of chapters; it serves little or no purpose for the typical ‘end-user’, the reader.

All the reader of a fiction book really needs is two things: a bookmark and a ‘memory’ of the author, title and cover combination so he/she doesn’t accidentally buy it again at the airport bookshop before that dreaded long and boring flight.

The reader of the reference book actually needs both the TOC and the index for browsing (the TOC) and searching (the index).

A work of fiction doesn’t usually have nor need an index because the end-user doesn’t require it. A reference book usually has an index and it is often used to go direct to a page (or pages) and locate something very specific.

Drawing parallels with our broader topic, some information needs both a classification system and an index, some information needs just an index and some doesn’t require either (e.g., works of fiction).

Generally speaking, scientific collections require a classification system (a scientific taxonomy); for example, the study of plant species and the study of animal species (e.g., using a phylogenetic classification system). Scientists simply could not communicate with each other without having a detailed and exact classification system in place. But, most end-users are not scientists; they are just people trying to find the best place to store something and want to find it again with the least amount of effort and pain.

My contention is that we can solve all ‘content management’ and records management needs with a solution based on the application of a sensible, simple and self-evident (read that as easy to use or human-oriented) indexing system plus the required searching capabilities (i.e., covering both Metadata and full text). There is a better way.

What indexing system?

Whenever I consult with customers who are contemplating the capture and organization of data (hopefully into information) I always give the same advice. That is, “When you are thinking about how to index data first think about how you will find it later.” Ask this key question of your end-users, “When you are about to search for information what do you usually know about it?” For example:

  • Do you know the last name?
  • Do you know the first name?
  • Do you know the date of birth?

A good indexing scheme reflects real life usage of the system; it reflects how ordinary humans work and ‘see’ information. Put simply, it indexes the information people will later need to search on. It indexes the information people understand and are comfortable with because it is self-evident.
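To make this concrete, here is a minimal sketch in Python (illustrative only, not RecFind code): the index holds exactly the fields end-users say they know at search time, and a search supplies whichever of those fields the user remembers.

```python
# Index only the fields users say they know at search time.
records = [
    {"last_name": "Smith", "first_name": "Jane", "date_of_birth": "1970-03-14"},
    {"last_name": "Twain", "first_name": "Mark", "date_of_birth": "1835-11-30"},
]

def search(records, **known):
    """Return every record matching whatever the user happens to know."""
    return [r for r in records
            if all(r.get(field) == value for field, value in known.items())]

# The user only remembers the last name, so that is all they supply.
print(search(records, last_name="Twain"))
```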

Indexing Emails

An email is usually described as an unstructured document (in the same way a Word or Excel document is described as being ‘unstructured’) but in fact it does have structure. Even better, everyone is familiar with an email’s structure, so we have very little to teach end-users; that is, we have a simple and self-evident ‘natural’ set of Metadata items to index:

  1. Date of email
  2. Sender
  3. Recipient
  4. CC
  5. BCC
  6. Subject
  7. Text of the body of the email
  8. Text of any attachments

For any normal end-user trying to find an email, this is how they would envision an appropriate search. They wouldn’t care that the email had been classified six levels deep using the world’s most sophisticated Business Classification Scheme (BCS).

Understanding what end-users typically ‘know’ before they do a search determines what elements you have to index. This is the key to implementing a successful indexing system.

The above 8 elements of an email are self-evident insomuch as, “Of course I need to be able to search on the sender or recipient or subject….”
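Because the structure is already there, mapping an email onto these eight elements is almost mechanical. Here is a sketch using only Python’s standard library email package (an illustration, not RecFind 6 capture code); attachment-text extraction is omitted for brevity:

```python
import email

def email_index_entry(raw_message: str) -> dict:
    """Map a raw RFC 822 message onto the eight natural index elements."""
    msg = email.message_from_string(raw_message)
    body = msg.get_payload() if not msg.is_multipart() else ""
    return {
        "date": msg["Date"],           # 1. Date of email
        "sender": msg["From"],         # 2. Sender
        "recipient": msg["To"],        # 3. Recipient
        "cc": msg["Cc"],               # 4. CC
        "bcc": msg["Bcc"],             # 5. BCC
        "subject": msg["Subject"],     # 6. Subject
        "body_text": body,             # 7. Text of the body
        "attachment_text": "",         # 8. Would require walking MIME parts
    }
```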

Indexing Electronic Documents

Now let’s look at ordinary electronic documents (i.e., not emails) because they are much less structured. We all know there are ways to add a common structure using features of MS Office like the information dialog box (asking for keywords, etc.), templates and smart tags, but these things are rarely and inconsistently used.

With shared drives we usually find some form of ‘evolved’ classification system because managing electronic documents in shared drives is akin to managing millions of pieces of paper in tens of thousands of file folders in hundreds of filing cabinets. Unfortunately, the good intentions and purity of design of the original architects of the shared-drive folder/sub-folder naming conventions (a classification system) are soon corrupted as users make uncoordinated changes, and the structure becomes unwieldy and incomprehensible.

In my opinion shared drives are OK for the creation of documents (i.e., a work area) but not OK for the management of documents. In fact I would say shared drives are absolutely hopeless for the management of documents as history and practice will attest.

Once again we need an appropriate indexing system and once again we need to ask, “What do people know at the time of the search?” For example:

  1. Original filename
  2. Original path/filename
  3. Type/suffix, e.g., .DOC, .XLS, .PDF, etc.
  4. Author
  5. *Subject
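Note that the first three of these elements come free from the file system itself; only the author and subject usually have to be supplied by the user or pulled from document properties. A minimal sketch, using only Python’s standard library and no vendor API:

```python
from pathlib import Path

def document_index_entry(path_string: str, author: str = "", subject: str = "") -> dict:
    """Build an index entry for one electronic document. The path, name and
    suffix are taken from the file system; author and subject are supplied."""
    p = Path(path_string)
    return {
        "original_filename": p.name,          # 1. Original filename
        "original_path": str(p),              # 2. Original path/filename
        "type_suffix": p.suffix.upper(),      # 3. e.g., ".DOC", ".XLS", ".PDF"
        "author": author,                     # 4. Author
        "subject": subject,                   # 5. Subject
    }

# e.g., document_index_entry(r"C:\franks stuff\sample.xls", author="Frank")
```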

Metadata and the Dublin Core

Let me quote from the Dublin Core website:

http://dublincore.org/

“The Dublin Core Metadata Element Set is a vocabulary of fifteen properties for use in resource description. The name "Dublin" is due to its origin at a 1995 invitational workshop in Dublin, Ohio; "core" because its elements are broad and generic, usable for describing a wide range of resources.”

To quote Wikipedia:

http://en.wikipedia.org/wiki/Dublin_Core

“It provides a simple and standardized set of conventions for describing things online in ways that make them easier to find. Dublin Core is widely used to describe digital materials such as video, sound, image, text, and composite media like web pages.”

The Simple Dublin Core Metadata Element Set (DCMES) consists of 15 elements.

  1. Title
  2. Creator
  3. Subject
  4. Description
  5. Publisher
  6. Contributor
  7. Date
  8. Type
  9. Format
  10. Identifier
  11. Source
  12. Language
  13. Relation
  14. Coverage
  15. Rights

To my mind the Dublin Core is an excellent set of elements for describing almost any ‘record’ because it is simple and appropriate to both computers and ‘normal’ end-users. As a professional, I like the elegance of the Dublin Core.

I also like the basic principle because it fits in with my hypothesis. That is, there is a better way to store, index and find records than a complex and unwieldy Taxonomy.

The Full Solution?

  • We need an application that stores documents of all types, i.e., all types of content.
  • We need an application that indexes both Metadata and full text.
  • We need an application with a customer configurable Metadata model.
  • We need an application that allows you to search on both Metadata and full text in a single search.
  • We need a search that combines Boolean and numeric operators, e.g., AND, OR, NOT, =, <, >, etc.
  • We need a ‘standard’ Metadata definition (a Class if you will) that includes a simple (not more than 20 in my estimation) set of data elements that includes all of the elements necessary to index all of the types of documents (including file folders and paper) that you manage.
  • We need an application that includes all types of data capture, e.g., from the file system, from the native application, from a scanner, etc.
  • We need an application with a comprehensive security system.
  • We need an application with all reporting options, e.g., both standard reports and ad hoc reports.
  • We need an application with a configurable audit trail.
  • We need an application with comprehensive import and export capabilities.
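To illustrate the search requirements in this list (a single search across both Metadata and full text, with Boolean operators), here is a minimal sketch. A real repository would push this work into SQL and full-text indexes, but the logic is the same in principle:

```python
def combined_search(records, metadata_filters, text_terms, mode="AND"):
    """One search over both Metadata and full text.
    metadata_filters: dict of field -> required value (equality only, for brevity)
    text_terms: words that must all (AND) or may any (OR) appear in the text."""
    results = []
    for record in records:
        meta_ok = all(record.get(f) == v for f, v in metadata_filters.items())
        hits = [t.lower() in record.get("content", "").lower() for t in text_terms]
        text_ok = all(hits) if mode == "AND" else any(hits)
        if meta_ok and text_ok:
            results.append(record)
    return results

# e.g., Type = "complaint" AND both "refund" and "overdue" in the full text:
# combined_search(records, {"type": "complaint"}, ["refund", "overdue"])
```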

 

The standard Metadata definition (Master Metadata Class)

I have come up with a limited set of elements that I believe can be used to index and find any type of record, paper or electronic. I have borrowed heavily from the Dublin Core because it makes good sense to do so; there is no need to reinvent the wheel.

  1. Title: A name given to the record. Typically, a Title will be a name by which the record is formally known. Text, e.g., "Business Plan for 2010"
  2. Author(s): The sender or author, e.g., Mark Twain or f.mckenna@k1corp.com
  3. Dated: The original date of the document or published date
  4. Date Received: Date received by the recipient or recipient's organization, whichever is the earlier
  5. Original Name: e.g., filename or file\pathname for electronic documents, such as C:\franks stuff\sample.xls
  6. Primary Identifier: An unambiguous reference to the record within a given context, e.g., the file number
  7. Secondary Identifier: An unambiguous reference to the record within a given secondary context, e.g., the case number, contract number or employee number
  8. Barcode: Barcode number or RFID tag
  9. Subject: The topic of the record. Typically, the subject will be represented using keywords or key phrases. Recommended best practice is to use a controlled vocabulary.
  10. Description: An account of the record. Description may include but is not limited to: an abstract, a table of contents, a graphical representation, or a free-text account of the record.
  11. Content: Words or phrases from the text content of the main document and attached documents
  12. Contents: Description of contents if the document is a container, e.g., an archive box
  13. Recipient(s): Addressed to, sent to, etc. People or organizations.
  14. CC recipient(s): CC and BCC recipients
  15. Publisher: An entity responsible for making the record available, i.e., the company or organization that either published the document or that employs the author
  16. Type: The nature or genre of the record, usually from a controlled list, e.g., complaint, quotation, submission, application, etc.
  17. Format: The file format, physical medium, or dimensions of the record, e.g., Word, Excel, PDF, etc.
  18. Language: e.g., English, French, Spanish
  19. Retention: The retention code determining the record’s lifecycle
  20. Security: Access rights, security code, etc.

 

My contention is that by using an ‘index set’ like the above 20 Metadata elements you can index, manage and retrieve any ‘record’ regardless of form and content.
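Written out as a record definition, the 20-element index set might look like the following sketch; the field names are my own renderings for illustration, not RecFind 6 identifiers:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MasterMetadataRecord:
    """One field per element of the proposed Master Metadata Class."""
    title: str                                        # 1  Title
    authors: List[str] = field(default_factory=list)  # 2  Author(s)
    dated: Optional[str] = None                       # 3  Original/published date
    date_received: Optional[str] = None               # 4  Date received
    original_name: Optional[str] = None               # 5  Original file/pathname
    primary_identifier: Optional[str] = None          # 6  e.g., file number
    secondary_identifier: Optional[str] = None        # 7  e.g., case number
    barcode: Optional[str] = None                     # 8  Barcode or RFID tag
    subject: Optional[str] = None                     # 9  Controlled keywords
    description: Optional[str] = None                 # 10 Abstract, TOC, etc.
    content: Optional[str] = None                     # 11 Full text + attachments
    contents: Optional[str] = None                    # 12 If record is a container
    recipients: List[str] = field(default_factory=list)     # 13 Recipient(s)
    cc_recipients: List[str] = field(default_factory=list)  # 14 CC and BCC
    publisher: Optional[str] = None                   # 15 Publisher
    record_type: Optional[str] = None                 # 16 e.g., complaint
    record_format: Optional[str] = None               # 17 e.g., Word, Excel, PDF
    language: Optional[str] = None                    # 18 e.g., English
    retention: Optional[str] = None                   # 19 Retention code
    security: Optional[str] = None                    # 20 Access rights
```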

What about all the standards ‘out there’?

There is a plethora of local, state, federal, industry and international standards pertaining to the management of records. Examples are DoD 5015, MoReq2, Dublin Core, ISO 15489 and VERS, and there are literally thousands of standards for Metadata.

The problem with most of these standards is that they are extraordinarily difficult to read and understand (even the Dublin Core documentation can be heavy going). I would draw a parallel back to the times when the Bible was in Latin but Christians were supposed to order their lives by its teachings. The problem was that only about 0.025% of Christians spoke Latin. Ergo, how do you order your life by a book you can’t read?

My assertion is that most records managers do not fully understand the standards they are charged with enforcing.

The problem isn’t with the records managers; it is with the people who write the standards. The standards are not written for records managers; they are written for academics and technical people (e.g., systems engineers who are experts in XML). Just like the Latin Bible, they are not written in the language of the intended user.

And even when you do think you have a grasp of the fundamentals there are always multiple points to be clarified (as to the exact meaning) with the standards authority.

What about Retention/Disposal schedules?

This should probably be the subject of another paper because retention schedules have also become way too complex, unwieldy and difficult to understand and apply.

The question will be, “How can I do away with my classification system when my retention codes are linked to it?”

I have looked at hundreds of retention schedules and every single one has been way too complicated for the organization trying to use it. Another problem is that very few of the authorities that compile retention schedules do so with computers in mind. This means that we end up with lots of very vague conditional statements that are almost impossible to computerize.

Most retention schedules are written for archivists to read, not for computers to process. This is the heritage of retention schedules; they assumed an appraisal process by a trained and expert archivist.

The Continuum model or ‘Whole of Life’ model or File Plan model all assume we will allocate a retention code at the time the record is created, not during a later appraisal process. This made much more sense and allowed us to better manage the record throughout its life cycle. However, many such schemes also linked the retention code to a classification term or embedded the retention codes within the classification system. This of course made the classification system even more complex and difficult to understand and apply.

To my mind no organization needs more than ten retention codes (shortest period, longest period and eight in between) and three life cycles (e.g., active, inactive, destroyed). This is also probably heresy to a lot of the records management profession but, I would ask them to think about the proposition that something that was entirely appropriate to the manual world is not necessarily entirely appropriate to the computerized world. There is an easier and simpler way to manage retention and there is no need to embed retention codes into the classification system just as there is no need for a classification system in any modern, computerized records management system.
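To show how simple such a scheme could be, here is a hypothetical sketch; the codes, periods and life cycles are invented for illustration, not a recommendation for any particular jurisdiction:

```python
# Ten retention codes (years to keep) and three life cycles, nothing more.
RETENTION_YEARS = {
    "R1": 1, "R2": 2, "R3": 3, "R4": 5, "R5": 7,
    "R6": 10, "R7": 15, "R8": 25, "R9": 50, "R10": 100,
}
LIFE_CYCLE = ["active", "inactive", "destroyed"]

def disposal_year(created_year: int, code: str) -> int:
    """Allocated at creation time, so a computer can apply it unaided."""
    return created_year + RETENTION_YEARS[code]

# A record created in 2013 under code R4 falls due for disposal in 2018.
assert disposal_year(2013, "R4") == 2018
```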

What about File Folders and Archive Boxes?

This is the classic stumbling block. This is when the records manager tells you that all the standards require you to use the same taxonomy for emails and electronic documents that he/she uses for traditional file folders and archive boxes.

You need to explain that the classification system from the manual paper-handling world is inappropriate to the computerized world, that it is an anachronism. You need to explain that all it will add is complexity, massive cost, confusion and a seriously negative attitude among end-users. You should say it is time to discard techniques and tools from the eighteenth century and adopt techniques from the twenty-first century. You should say you have a much better way. Then you should probably duck and run. Failing all else, blame me and give them my email address.

 

 

What is happening with the Tablet market?

by Frank 18. August 2013 06:00

I run a software company called the Knowledgeone Corporation and our main job is to provide the tools to capture, manage and find content. As such, we need to be on top of the hardware and software systems used by our customers so that we can constantly review and update our enterprise content management products like RecFind 6 so that they are appropriate to the times and devices in use.

I have spoken in previous Blogs about tablets and form factors and what is needed for business so other than providing the following links, I won’t go over old ground.

Will the Microsoft Surface tablet unseat the iPad?

The PC is dead, or is it?

What will be the next big thing in IT?

Could you manage all of your records with a mobile device?

Why aren’t tablets the single solution yet?

The real impact of mobilization – How will it affect the way we work?

Mobile and the Web – The real future of applications?

Form factor – The real problem with mobile devices doing real work

Since my last Blog on the subject we have all seen RT tablets come and go (there will be a big landfill of RT tablets somewhere) and we are now all watching the slow and painful demise of Blackberry. In both of these cases we have to ask how big, super-clever companies like Microsoft and Blackberry could get it so wrong. Just thinking about the number of well-educated and highly experienced marketing and product people they have, it is inconceivable that they couldn’t work out what the average Joe in the street could have told them for free.

Then let’s also think about HP’s disastrous experiment with its TouchPad tablet (another e-waste landfill) and it becomes apparent that some of the largest, richest and best credentialed companies in the world can’t forecast what will happen in the tablet market.

In my opinion the problem all along, apart from operating system selection (iOS or Android?), has been matching needs to form factor and processing power. For example, no one wants a 12-inch phone and no one wants to write and read large documents on a 3-inch screen. This is why most of us still carry around three devices instead of one: a phone, a tablet and a laptop. This is just plain silly; what is the point of a small form factor device if I have to supplement it with a large form factor device? Like most other users, I really just want to carry around one device, and I want it to have the capabilities and processing power for all the work I do.

It is for this reason that I believe the next big thing in the tablet market will be based on phones, not tablets. I envision slightly larger and much more powerful phones with universal connectors (are you listening, Apple?) and docking capability. I would also like them to have a minimum of 4G, and preferably 5G when available.

I want to be able to use it as a phone and when I get to my office I want to connect it to my keyboard, screen and network. I want to be able to connect it to a projector when visiting customers and prospects and I want a dynamically sizing desktop that knows when to automatically adjust the display to the form factor being viewed. That is, I want a different desktop for my screen at work than I want on the phone screen when travelling.

This brings up an interesting issue about choice of operating system as Windows owns about 95% of all business PCs and servers. I have previously never thought about buying a Windows Phone (I had one once a few years ago with Windows CE and it was awful) but my ideal device is going to have to run on the Windows operating system to be really usable in my new one-device paradigm.

I wonder why Microsoft didn’t think of this?

Why don’t you make it easy for end users to find what they need?

by Frank 8. June 2013 06:00

Many records managers and librarians still hold on to the old paradigm that says if a user wants something they should come through the information management professional. They either believe that the end user can’t be trusted to locate the information or that the task is so complex that only an information professional can do it in a proper and professional manner.

This tightly controlled approach to information access has been with us for a very long time; unfortunately, not always to the benefit of the end user. End users often interpret it as a vice-grip on power rather than a service.

In my experience (many years designing information and knowledge management solutions), most end users would like the option of searching for themselves and then deciding whether or not to request assistance.

Of course it may also be true that the system in use is so complex or so awkward to use that most end users (usually bereft of training) find it too hard to use and so have to fall back on asking the information professional. However, if this is the case then there will invariably be a backlog of requests and end users will be frustrated because they have to wait days or weeks for a response. In this environment, end users begin to feel like victims rather than valued customers or ‘clients’.

The obvious answer is to make it easy for end users to find what they are looking for but this obvious answer seems to escape most of us as we continue to struggle with the obscure vagaries of the existing system and an often impenetrable wall of mandated policies, processes and official procedures.

If we really want a solution, it’s time to step outside of the old and accepted model and provide a service to end users that end users actually want, can use and appreciate. If we don’t take a wholly new approach and adopt a very different attitude and set of procedures then nothing will improve and end user dissatisfaction (and anger) will grow until it reaches the point where they simply refuse to use the system.

End users are not stupid; end users are dissatisfied.

One of the core problems, in my experience, is a failure to accept that the requirements of the core, professional users are very different from the requirements of the end users. At the risk of oversimplifying, end users only need to know what they need to know. End users need a ‘fast-path’ into the system that allows them to find out what they need to know (and nothing more) in the shortest possible time and via the absolute minimum number of keystrokes, mouse-clicks or swipes.

End users need a different interface to the system than professional users do.

This isn’t because they are less smart; it is because the ‘system’ is just one of the many things they have to contend with during a working day, not their core focus. They don’t have time (or the interest) to become experts, nor should they have to.

If end users can’t find stuff it isn’t their fault; it is the system’s fault.

The system, of course, is more than just the software. It is the way menus and options are configured and made available; it is the policy and procedures that govern access and rights to information; it is the attitude of those ‘in power’ to those who are not empowered.

If you want happy and satisfied end users, give them what they need.

Make sure that the choices available to an end user are entirely appropriate to each class of end user. Don’t show them more options than they need and don’t give them more information than they are asking for. Don’t ask them to navigate down multiple levels of menus before they can ask the question they want to ask; let them ask the question as the very first thing they do in the system. Then please don’t overwhelm them with information; just provide exactly and precisely what they asked for.

If you want the end users off your back, give them what they need.

I fall back on my original definition of a Knowledge Management system from 1997, “A Knowledge Management system is one that provides the user with the explicit information required, in exactly the form required, at precisely the time the user needs it.”

With hindsight, my simple definition can be applied to any end user’s needs. That is, please provide a system that provides the end user with the explicit information required, in exactly the form required, at precisely the time the end user needs it.

What could be more simple?

More references:

The IDEA – 1995

Knowledge Management, the Next Challenge? - 1997

Whatever happened to the Knowledge Management Revolution?  – 2006

A Knowledge Management System – A Discourse – 2008

 

Is the IT industry faltering because we have all just lost interest?

by Frank 15. April 2013 06:00

I have just read another IDC industry report talking about how PC sales plunged 14 percent in the first three months of 2013. The report goes on to show that this is a worldwide trend, not just in the USA or Asia Pacific. Europe, for example, was the worst with a 16 percent decline.

I also read lots of industry reports telling me how unsuccessful Windows 8 has been, much worse even than the dreaded Vista. Even Microsoft with its huge marketing budget has not been able to buck the trend. Apple also reports lower sales of its PCs and the report suggests they may have been cannibalized by Apple’s own tablets (how ironic).

Is it all to do with the ongoing world financial crisis? Do we blame the politicians and bureaucrats of Ireland, Iceland, Spain, Portugal, Italy, Greece and now Cyprus for this massive fall-off in PC shipments? Or, as I surmise, are we all more than a little bored with the IT industry, its hype and the too-regular platform changes forced upon us? Are we all jaded by a decade of too rapid and unneeded change?

I like Windows 7, it works, it is stable and it allows me to run all the programs I need for my business. Why would I upgrade especially as I am going to have to retrain all my staff and also have to upgrade a lot of the software and hardware I use? What compelling reason is there to upgrade to Windows 8?

Similarly, my desktops and servers are now 3 to 4 years old but I bought high quality Dell OptiPlex PCs and Dell Xeon rack servers and they are all still more powerful than I need and still working fine. When something occasionally fails I just pay Dell to fix or replace it. It is a lot less disruptive and a lot less costly than replacing everything. What compelling reason is there for me to suffer the pain and disruption of replacing my PCs and servers?

Of course the world financial crisis has a lot to do with the tumbling PC sales figures because most organisations are still cutting costs to maintain or grow profits. However, I also detect a sea change in attitudes among my peer groups and customers. We have had enough of constant change for change’s sake. Most of the people I deal with are now sticking by the old maxim of “If it ain’t broke, don’t fix it.”

It looks like a lot of us have all lost interest in technology; we have even become bored and blasé about technology. So it is 10% lighter and 15% faster, “who cares?” So it is prettier and has even more features I won’t ever use, “who cares?” There is another iPhone that is slightly bigger and slightly thinner than the last one, “who cares?” There is yet another update to Linux or Android, “who cares?”

I own and run a computer software company called Knowledgeone Corporation that builds and markets a range of enterprise content management software applications under the banner of RecFind 6. Because of this I am vitally interested in what is happening both with the ongoing world financial crisis and PC shipments because both affect my business.

Just like my customers, I am fed up with the industry trying to force-feed me new products that I don’t need and, frankly, am just not interested in. I am the same as my customers: they just want my products to work day in and day out, 24/7, and do the job they were purchased for. They will buy maintenance because that protects their investment in my products but right now, most aren’t really ready to face or fund a massive change in their operation unless there is a damn good reason with a sound business justification.

I believe one of the main reasons PC sales are down, in addition to the world financial crisis, is that right now we just aren’t interested in new technology for technology’s sake. We are more interested in running our businesses in the most cost-effective manner and maintaining profitability. We are also tired of the IT industry trying to hard-sell another ‘new thing’ every 3 years or so.

I don’t need new PCs, I don’t need new servers, I don’t need the next iPhone or update to Android. I think the world as a whole is now clearly differentiating between need and want and if need rather than want is driving the system then trying to woo us with faster, thinner, prettier technology just isn’t going to work. Frankly, I think we are bored with technology and all have more important things to think about like how to remain profitable and protect our companies and the jobs of our staff.

Maybe we are all waiting for the IT industry to come up with something really, really interesting and really, really useful that will actually help us strengthen our bottom line? Now that would be something new.

Are you still losing information in your shared drives?

by Frank 18. November 2012 06:00

Organizations both large and small, government and private, have been accumulating electronic documents in shared drives since time immemorial (or at least since the early 1980s, when networked computers and file servers became part of the business world). Some organizations still have those early documents, “just in case”.

Every organization has some form of shared drives whether or not they have an effective and all-encompassing document management system in place (and very few organizations even come close to meeting this level of organization).

All have megabytes (one million bytes or characters, 10⁶) of information stored in shared drives, the vast majority have gigabytes (10⁹), many now have terabytes (10¹²) and the worst have petabytes (10¹⁵).

As all the IT consultants are now fixated on “Big Data” and how to solve the rapidly growing problem, it won’t be long before we are into really big numbers like exabytes (10¹⁸), zettabytes (10²¹) and finally, when civilization collapses under the weight, yottabytes. For the record, a yottabyte is 10²⁴ bytes, or one quadrillion gigabytes or, to keep it simple, one septillion bytes. And believe me the problem is real because data breeds faster than rabbits and mice.

Most of this electronic information is unstructured (e.g., Word and text files of various kinds) and most of it is unclassified (other than maybe being in named folders or sub-folders or sub-sub-folders). None of it is easily searchable in a normal lifetime and there are multiple copies and versions some of which will lead to legal and compliance nightmares.

The idea of assigning retention schedules to these documents is laughable and in general everyone knows about the problem but no one wants to solve it. Or, more precisely, no one wants to spend the time and money required to solve this problem. It is analogous to the billions of dollars being wasted each year by companies storing useless old paper records in dusty offsite storage locations; no one wants to step up and solve the problem. It is a race to see which will destroy civilization first, electronic or paper records.

When people can’t find a document they create a new one. No one knows which is the latest version and no one wants to clean up the store in case they accidentally delete something they will need in a month or a year (or two or three). Employees often spend far more (frustrating) time searching for a document to use as a template or premise than it would take to create a new one from scratch.

No one knows what is readable (WordStar anyone?) and no one knows what is relevant and no one knows what should be kept and what should be destroyed. Many of the documents have become corrupted over time but no one is aware of this.

Some organizations have folders and sub-folders defined in their shared drives which may at one time have roughly related to the type of documents being stored within them. Over time, different people had different ideas about how the shared drives and folders should be organized, and they have probably been changed, renamed and reorganized multiple times. Employees, however, didn’t always follow the rules, so there are misfilings, dangerous copies and orphans everywhere.

IT thinks it is an end user problem and end users think it is an IT problem.

The real problem is that most of these unstructured documents are legal records (evidence of a business transaction) and some are even vital records (essential to the ongoing operation of the entity). Some could be potentially damaging and some could be potentially beneficial but no one knows. Some could involve the organization in legal disputes, some could involve the organization in  compliance disputes and some could save the organization thousands or millions of dollars; but no one knows.

Some should have been properly destroyed years ago (thus avoiding the aforementioned legal and compliance disputes) and some should never have been destroyed (costing the organization evidence of IP ownership or a billable transaction). But, no one knows.

However, everyone does know that shared drives waste an enormous amount of people’s time and are a virtual ‘black hole’ for both important documents and productivity.

There is a solution to the shared-drives problem but it can’t happen until some bright and responsible person steps up and takes ownership of both the problem and the solution.

For example, here is my recommendation using our product RecCapture (other vendors will have similar products, designed as ours is to capture all new and modified electronic documents fully automatically according to a set of business rules you develop for your organization). RecCapture is an add-on to RecFind 6 and uses the RecFind 6 relational database to store all captured documents.

RecCapture allows you to:

  • Develop and apply an initial set of document rules (which to ignore, which to keep, how to store and classify them, etc.) based on what you know about your shared drives (and yes, the first set of rules will be pretty basic because you won’t know much about the vast amount of documents in your shared drives).
  • Use these rules to capture and classify all corporate documents from your shared drives and store and index them in the RecFind 6 relational SQL database (the initial ‘sweep’).
  • Once they are in the relational database you can then utilize advanced search and global change capabilities to further organize and classify them and apply formal retention schedules. You will find that it is a thousand times easier to organize your documents once they are in RecFind 6.
  • Once the documents are saved in the RecFind 6 database (we maintain them in an inviolate state as indexed Blobs) you can safely and confidently delete most of them from your shared drives.
  • Then use these same document rules (continually being updated as you gain experience and knowledge) to automatically capture all new and modified (i.e., new versions) electronic documents as they are stored in your shared folders. Your users don’t need to change the way they work because the operation of RecCapture is invisible to them, it is a server-centric (not user-centric) and a fully automatic background process.
  • Use the advanced search features, powerful security system and versioning control of RecFind 6 to give everyone appropriate access to the RecCapture store so users can find any document in seconds thus avoiding errors and frustration and maximizing productivity and job satisfaction.
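To make the rules-driven idea concrete, here is a hypothetical sketch of an initial sweep. Nothing below is RecCapture’s actual configuration or API; a production tool stores indexed blobs in the SQL database rather than copying files, but the shape of the rules is the point:

```python
import shutil
from pathlib import Path

# Example business rules: which documents to ignore and which to capture.
IGNORE_SUFFIXES = {".tmp", ".bak", ".lnk"}
CAPTURE_SUFFIXES = {".doc", ".docx", ".xls", ".xlsx", ".pdf", ".msg"}

def initial_sweep(shared_drive: str, repository: str) -> int:
    """Copy every document matching the rules into the repository,
    preserving its original path as the initial classification."""
    captured = 0
    for f in Path(shared_drive).rglob("*"):
        if not f.is_file() or f.suffix.lower() in IGNORE_SUFFIXES:
            continue
        if f.suffix.lower() in CAPTURE_SUFFIXES:
            dest = Path(repository) / f.relative_to(shared_drive)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)
            captured += 1
    return captured
```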

RecCapture isn’t expensive, it isn’t difficult to set up and configure and it isn’t difficult to maintain. It can be installed, configured and operational in a few days. It doesn’t interfere with your users and doesn’t require them to do anything other than their normal work.

It captures, indexes and classifies documents of any type. It can also be used to automatically abstract any text-based document during the capture process. It makes all documents findable online (full text and Metadata) via a sophisticated search module (Boolean, Metadata and range searching, etc.) and a military-strength security regime.

Accredited users can access the document store over the network and over the Internet.  Stored documents can be exported in native format or industry standard XML. It is a complete and easy to implement solution to the shared drives problem.

I am sure that Knowledgeone Corporation isn’t the only vendor offering modern tools like RecFind 6 and RecCapture so there is no excuse for you continuing to lose documents in your shared drives.

Why don’t you talk to a few enterprise content software vendors and find a tool that suits you? You will be amazed at the difference in your work environment once you solve the shared drives problem.  Then ask the boss for a pay rise and a promotion; you deserve it.

Can you save money with document imaging?

by Frank 4. November 2012 06:00

I run a software company called Knowledgeone Corporation that produces an enterprise content management solution called RecFind 6 that includes extensive document imaging capabilities. We have thousands of customers around the world and as far as I can see most use RecFind 6 for document imaging of one kind or another.

This certainly wasn’t the case twenty years ago when document imaging tools were difficult to use and were expensive stand-alone ‘specialised’ products. Today however, almost every document management or records management product includes document imaging capabilities as a normal part of the expected functionality. That is, document imaging has gone from being an expensive specialised product to just a commodity, an expected feature in almost any information management product.

This means most customers have a readily available, easy-to-use and cost-effective document imaging tool at their fingertips. That being the case there should be no excuse for not utilizing it to save both time and money. However, I guarantee that I could visit any of my customers and quickly find unrealised opportunities for them to increase productivity and save money by using the document imaging capabilities of my product RecFind 6. They don’t even have to spend any money with me because the document imaging functions of RecFind 6 are integrated as ‘standard’ functionality and there is no additional charge for using them.

So, why aren’t my customers and every other vendor’s customers making best use of the document imaging capabilities of their already purchased software?

In my experience there are many reasons but the main ones are:

Lack of knowledge

To the uninitiated, document imaging may look simple, but there is far more to it than first appears, and unless your staff have hands-on experience there is unlikely to be an ‘expert’ in your organization. For this reason I wrote a couple of Blogs earlier this year for the benefit of my customers: Estimating the cost of your next imaging job and The importance of document imaging. This was my attempt to add to the knowledge base about document imaging.

Lack of ownership

The need for document imaging transects the whole enterprise but there is rarely any one person or department charged with ‘owning’ this need and with applying best-practice document imaging policies and procedures to ensure that the organization obtains maximum benefits across all departments and divisions. It tends to be left to the odd innovative employee to come up with solutions just for his or her area.

Lack of consultancy skills

We often say that before we can propose a solution we need to know what the problem is. The way to discover the true nature of a problem is to deploy an experienced consultant to review and analyse the supposed problem and then present an analysis, conclusions and recommendations that should always include a cost-benefit analysis. In our experience very few organizations have staff with this kind of expertise.

Negative impact of the Global Financial Crisis that began in 2008

All over the world since 2008 our customers have been cutting staff, cutting costs and eliminating or postponing non-critical projects. Some of this cost cutting has been self-defeating and has produced negative results and reduced productivity. One common example is the cancelling or postponing of document imaging projects that could have significantly improved efficiency, productivity and competitiveness as well as reducing processing costs. This is especially true if document imaging is combined with workflow to better automate business processes. I also wrote a Blog back in July 2012 for the benefit of our customers to better explain just what business process management is all about, called Business Process Management, just what does it entail?

In answer to the original question I posed, yes you can save money utilizing simple document imaging functionality especially if you combine the results with new workflow processes to do things faster, more accurately and smarter. It is really a no-brainer and it should be the easiest cost justification you have ever written.

We have already seen how most information management solutions like RecFind 6 have embedded document imaging capabilities so most of you should have existing and paid-for document imaging functionality you can leverage off.

All you really need to do to save your organization money and improve your work processes is look for and then analyse any one of many document imaging opportunities within your organization.

A clue: wherever there is paper there is a document imaging opportunity.

Will the Microsoft Surface tablet unseat the iPad?

by Frank 28. October 2012 06:00

I run a software company called Knowledgeone Corporation that produces a content management system called RecFind 6. We need to be on top of what is happening in the hardware market because we are required to support the latest devices such as Apple’s iPad and Microsoft’s Surface tablet. Our job after all is to capture and manage content and the main job of devices like the iPad and Surface tablet is to allow end users to search for and display content.

At this time we plan to support both with our web client but each device has its special requirements and we need to invest in our software to make sure it perfectly suits each device. The iPad is by now a well-known partner but the Surface tablet is still something of a mystery and we await the full local release and our first test devices.

As we produce business software for corporations and government, our focus is on the use of tablets in a business scenario. This means using the tablets for both input and output; that is, capturing information and documents from the end user as well as presenting information and documents to the end user.

When looked at from a business perspective the Surface tablet starts to be a much better proposition for us than the iPad. I say this because of three factors: connectivity, screen size and open file system. To my mind these are the same three factors that severely limit the use of the iPad in a business environment.

Let me elaborate; I can connect more devices to the Surface, the slightly larger screen makes it easier to read big or long documents and the open file system allows us to easily upload and download whatever documents the customer wants. Ergo, the Surface is a much more useful product for our needs and the needs of our corporate and government customers.

So, after a superficial comparison, the Surface appears to have it all over the iPad. Or does it?

Maybe not, given the early reviews of the buggy nature of Windows RT. Maybe not, given that Windows 8 will never be as easy to use or as intuitive as iOS. Maybe not, given that the iPad just works and no end user ever needed a training course or user manual. I very much doubt that end users will ‘learn’ Windows 8 as easily as they learnt iOS.

One unkind reviewer even referred to the Surface as a light-weight notebook. I don’t agree, though with its attached keyboard it comes very close. I do think it is different to a notebook and I applaud Microsoft for its investment and innovation. I think the Surface is a new product rather than a new-generation notebook, and I think most end users will see it that way too.

As is often the case, both products have strengths and weaknesses, and the real battle is yet to come as early adopters buy the Surface and test it. This is a critical time for acceptance and I hope Microsoft hasn’t released this product before it is ready. The early reviews I have read of the RT version are not encouraging, especially as everyone still has awful memories of the Vista experience.

Microsoft is super brave because it is releasing two new products at the same time: the Surface hardware and Windows 8. Maybe it would have been smarter to get Windows 8 out and proven before loading it onto the Surface, but my guess is that Microsoft marketing is in one hell of a hurry to try to turn the iPad tide around. There must be a lot of senior executives at Microsoft desperate to gain control of the mobile revolution in the same way they dominated the PC revolution. The Surface plus Windows 8 is a big-bang approach rather than the more conservative get-wet-slowly approach, and I sincerely wish them all the best because we all need a much better tablet for business use. Apple also needs a little scare to remind it to be more respectful of the needs of its customers. Competition is always a good thing for consumers and Apple has had its own way with the iPad for too long now.

Don’t get me wrong, I love my iPad, but I am frustrated with its shortcomings and I am hoping that more aggressive competition will force Apple to lift its game and stop being so damn arrogant.

I am about to place my orders for some Surface tablets for testing as soon as the Windows 8 Pro version is available. Watch out for an update on what we find in a month or so.

Are you also confused by the term Enterprise Content Management?

by Frank 16. September 2012 06:00

I may be wrong but I think it was AIIM that first coined the phrase Enterprise Content Management to describe both our industry and our application solutions.

Whereas the term isn’t as nebulous as Knowledge Management, it is nevertheless about as useful when trying to understand what organizations in this space actually do. At its simplest level it is a collective term for a number of related business applications: records management, document management, imaging, workflow, business process management, email management and archiving, digital asset management, web site content management, etc.

To simple people like me the more appropriate term or label would be Information Management, but as I have already covered this in a previous Blog I won’t belabour the point in this one.

When trying to define what enterprise content management actually means or stands for we can discard the words ‘enterprise’ and ‘management’ as superfluous to our needs and just concentrate on the key word ‘content’. That is, we are talking about systems that in some way create and manage content.

So, what exactly is meant by the term ‘content’?

In the early days of content management discussions we classified content into two broad categories: structured and unstructured. Basically, structured content had named sections or labels and unstructured content did not. Generalising even further, we can say that an email is an example of structured content because it has commonly named, standardised and accessible sections or labels like ‘Sender’, ‘Recipient’, ‘Subject’, etc., that we can interrogate and rely on to carry a particular class or type of information. The same general approach would regard a Word document as unstructured because its content does not have commonly named and standardised sections or labels. Basically, a Word document is an irregular collection of characters that you have to parse and examine to determine content.
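
To make the distinction concrete, here is a minimal Python sketch (the message and addresses are invented for illustration) that interrogates an email’s standard labels directly; the plain block of text below it offers no such handles:

```python
import email

# A structured object: standard, named labels we can rely on.
raw = (
    "From: frank@example.com\n"
    "To: sales@example.com\n"
    "Subject: Quarterly report\n"
    "\n"
    "Please find the report attached.\n"
)
msg = email.message_from_string(raw)
print(msg["From"])     # frank@example.com
print(msg["Subject"])  # Quarterly report

# An unstructured object: just characters we have to parse and guess at.
text = "Please find the quarterly report attached. Regards, Frank"
# Who is the recipient? There is no 'To' label to interrogate; we would
# have to parse the text and infer it, which is slow and error-prone.
```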

Like Newtonian physics, the above generalisations do not apply to everything and can be argued until the cows come home. In truth, every document has an accessible structure of some kind. For example, a Word document has an author, a size, a date written, etc. It is just far easier to find out who the recipient of an email was than the recipient of a Word document. This is because there is a common and standard ‘tag’ that tells us who the recipient of an email is, and there is no such common and standard tag for a Word document.

In our business we call ‘information about information’ (e.g., the recipient and date fields on an email) Metadata. If an object has recognizable Metadata then it is far easier to process than an object without recognizable Metadata. We may then say that adding Metadata to an object is the same as adding structure.
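
As a small illustration of digging Metadata out of an object, the Python sketch below reads the author, title and creation date from a .docx file. It relies only on the fact that a modern .docx is a zip archive containing a standard docProps/core.xml part; the file name is hypothetical:

```python
import zipfile
import xml.etree.ElementTree as ET

# Namespaces used by the standard .docx core-properties part.
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

def core_metadata(path):
    """Return a small metadata dictionary for a .docx file."""
    with zipfile.ZipFile(path) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))
    return {
        "author": root.findtext("dc:creator", default="", namespaces=NS),
        "title": root.findtext("dc:title", default="", namespaces=NS),
        "created": root.findtext("dcterms:created", default="", namespaces=NS),
    }

print(core_metadata("report.docx"))  # e.g. {'author': 'Frank', ...}
```

Note how much digging this takes compared to reading the ‘To’ header of an email; that difference is exactly the structured/unstructured gap described above.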

Adding structure is what we do when we create a Word document using a template or when we add tags to a Word document. We are normalizing the standard information we require in our business processes so the objects we deal with have the structure needed to easily and accurately identify and process them.
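
A hypothetical Python sketch of the same idea: capture the structure our processes need at the moment of creation, so nothing downstream has to guess. The field names are invented for illustration:

```python
from datetime import date

# Hypothetical: wrap a newly created document in the metadata our
# business processes require, at the moment it is created.
def create_document(body, author, doc_type, recipient):
    return {
        "author": author,
        "recipient": recipient,
        "type": doc_type,          # e.g. 'invoice', 'correspondence'
        "created": date.today().isoformat(),
        "body": body,              # the unstructured part stays as-is
    }

doc = create_document("Please find attached...", "Frank",
                      "correspondence", "sales")
# Downstream, routing is a simple lookup rather than a parsing exercise.
print(doc["recipient"])  # sales
```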

This is of course one of the long-standing problems in our industry: we spend far too much time and money trying to parse and interpret unstructured objects when we should be going back to the coal face and adding structure when the object is first created. That is relatively easy to do if we are creating the objects (e.g., a Word document) but not easy to achieve if we are receiving documents from foreign sources like our customers, our suppliers or the government. Unless you are the eight-hundred-pound gorilla (like Walmart) it is very difficult to force your partners to add the structure you require to make processing as fast, as easy and as accurate as possible.

There have been attempts in the past to come up with common ‘standards’ that would have regulated document structure, but none have been successful. The last big push came when XML was the bright new kid on the block and the XML industry rushed headlong into defining XML standards for every conceivable industry, to facilitate common structures and to make data transfer between different organizations as easy and as standard as possible. The various XML standardisation projects sucked up millions or even billions of dollars but did not produce the desired results; we are still spending billions of dollars each year parsing unstructured documents trying to determine content.
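
For readers who missed that era, the sketch below shows, with an invented and much-simplified invoice format, the kind of common structure those XML projects promised: every field of interest carries an agreed tag that any partner’s system can look up directly:

```python
import xml.etree.ElementTree as ET

# An invented, simplified example of the kind of common structure the
# XML standardisation projects aimed for: every field has an agreed tag.
doc = ET.fromstring("""
<invoice>
  <sender>Acme Pty Ltd</sender>
  <recipient>Knowledgeone Corporation</recipient>
  <issued>2012-09-01</issued>
  <total currency="AUD">1250.00</total>
</invoice>
""")

# With an agreed structure, extraction is a lookup, not a parsing exercise.
print(doc.findtext("recipient"))           # Knowledgeone Corporation
print(doc.find("total").get("currency"))   # AUD
```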

So, back to the original question: what exactly is Enterprise Content Management? The simple answer is that it is the business or process of extracting useful information from objects such as emails, PDFs and Word documents and then using that information in a business process. It is all about capturing Metadata and content in the most accurate and expeditious manner possible so we can automate business processes as much as possible.

If done properly, it makes your job more pleasant, saves your organization money and makes your customers and suppliers happier. As such it sounds a lot like motherhood (who is going to argue against it?) but it certainly isn’t manna from heaven. There is always a cost and it is usually significant. As always, you reap what you sow; effort and cost produce rewards.

Is content management something you should consider? The answer is definitely yes, with one proviso: please make sure that the benefits are greater than the cost.
