Year: 2008

Enterprise Search 2008 Wrap-Up

It would be presumptuous to think that I could adequately summarize a very active year of evolution among a huge inventory of search technologies. This entry is less an analytical study and forecast than a record of what I have learned and what I think about the state of the market.

The weak link in the search market is product selection methods. My first thought is that we are in a state of technological riches without clear guideposts for which search models work best in any given enterprise. Those tasked to select and purchase products are not well-educated about the marketplace, and they are usually not given the budget or latitude to purchase expert analysis when it is available. It is a sad commentary that organizations grant travel budgets to attend conferences where only limited information can be gathered about products, but will not spend a few hundred dollars on in-depth comparative expert analyses of a large array of products.

My sources for this observation are numerous, confirmed by speakers in our Gilbane conference search track sessions in Boston and San Francisco. As they related their personal case histories for selecting products, speakers shared no tales of actually doing literature searches or in-depth research using resources with a cost attached. This underscores another observation: those procuring search do not know how to search, and operate in the belief that they can find “good enough” information using only “free stuff.” Even their review of the material gathered is limited to skimming rather than a systematic reading for concrete facts. This does not make for well-reasoned selections. As noted in an earlier entry, a widely published chart stating that product X is a leader does nothing to enlighten your enterprise’s search for search. In one case, product leadership is determined primarily by the total software sales of the “leader,” of which search is a minuscule portion.

Don’t expect satisfaction with search products to rise until buyers develop smarter methods for selection and better criteria for making a buy decision that suits a particular business need.

Random Thoughts. It will be a very long time before we see a universally useful, generic search function embedded in Microsoft (MS) product suites as a result of the FAST acquisition. Asked earlier in the year by a major news organization whether I thought MS had paid too much for FAST, I responded “no” if what they wanted was market recognition, but “yes” if they thought they were getting state-of-the-art technology. My position holds; the financial and legal mess in Norway only complicates the road to meshing search technology from FAST with Microsoft customer needs.

I’ve wondered what has happened to the OmniFind suite of search offerings from IBM. One source tells me it makes IBM money because none of the various search products in the line-up are standalone, nor do they provide an easy transition path from one level of product to another for upward scaling and enhancements. IBM can embed any of its search products in any bundled platform of other options and charge for the extensive services needed to bring it on-line with heavy customization.

Three platform vendors seem to be penetrating the market slowly but steadily by offering more cohesive solutions to retrieval. Native search solutions are bundled with complete content capture, publishing and search suites, purposed for various vertical and horizontal applications. These are Oracle, EMC, and OpenText. None of these are out-of-the-box offerings and their approach tends to appeal to larger organizations with staff for administration. At least they recognize the scope and scale of enterprise content and search demands, and customer needs.

On user presentations at the Boston Gilbane conference: I was very pleased with all the sessions, and with the work and thought the speakers put into their talks. There were some noteworthy comments in the sessions on Semantic Search and Text Technologies, Open Source, and Search Appliances.

On the topic of semantic (contextual query and retrieval) search, text mining and analytics, the speakers covered the range of complexities in text retrieval, leaving the audience with a better understanding of how diverse this domain has become. Different software solutions need to be employed depending on the specific business problems to be solved. This will not change, and enterprises will need to discriminate about which aspects of their businesses need some form of semantically enabled retrieval and then match expectations to offerings. Large organizations will procure a number of solutions, all worthy and useful. Jeff Catlin of Lexalytics gave a clear set of definitions within this discipline, industry analyst Curt Monash provoked us with where to set expectations for various applications, and Win Carus of Information Extraction Systems illustrated the tasks extraction tools can perform to find meaning in a heap of content. The story has yet to be written on how semantic search is impacting, and will impact, our use of information within organizations.

Leslie Owens of Forrester and Sid Probstein of Attivio helped to ground the discussion of when and why open source software is appropriate. The major take-away for me was an understanding of the type of organization that benefits most as a contributor to and user of open source software. Simply put, you need to be heavily invested and engaged on the technical side to get out of open source what you need, to mold it to your purpose. If you do not have the developers to tackle coding, or the desire to share in a community of development, your enterprise’s expectations will not be met and disappointment is sure to follow.

Finally, several lively discussions about search appliance adoption and application (Google Search Appliance and Thunderstone) strengthened my case for doing homework and spending on careful evaluations before jumping into procurement. While all the speakers seem to be making positive headway with their selected solutions, the path to success has involved more diversions and changes of course than necessary for some, because the vetting and selection process was too “quick and dirty” or dependent on too few information sources. One truth was revealed: true plug-and-play is an appliance myth.

What will 2009 bring? I’m looking forward to seeing more applications of products that interest me from companies that have impressed me with thoughtful and realistic approaches to their customers and target audiences. Here is an uncommon clustering of search products.

Multi-repository search across database applications, content collaboration stores, document management systems, and file shares: Coveo, Autonomy, Dieselpoint, dtSearch, Endeca, Exalead, Funnelback, Intellisearch, ISYS, Oracle, Polyspot, Recommind, Thunderstone, Vivisimo, and X1. In this list is something for every type of enterprise and budget.

Business and analytics focused software with intelligence gathering search: Attensity, Attivio, Basis Technology, ChartSearch, Lexalytics, SAS, and Temis.

Comprehensive solutions for capture, storage, metadata management, and search, supporting high-quality management of content for targeted audiences: Access Innovations, Cuadra Associates, Inmagic, InQuira, Knova, Nstein, OpenText, ZyLAB.

Search engines with advanced semantic processing or natural language processing for high-quality, contextually relevant retrieval when the quantity of content makes human metadata indexing prohibitive: Cognition Technologies, Connotate, Expert System, Linguamatics, Semantra, and Sinequa.

Content classification, thesaurus management, and metadata server products that interplay with other search engines; a few have impressed me with their vision and thoughtful approach to the technologies: MarkLogic, MultiTes, Nstein, Schemalogic, Seaglex, and Siderean.

Search with a principal focus on SharePoint repositories: BA-Insight, Interse, Kroll Ontrack, and SurfRay.

Finally, some unique search applications are making serious inroads. These include Documill for visual and image, Eyealike for image and people, Krugle for source code, and Paglo for IT infrastructure search.

This is the list of companies that interest me because I think they are on track to provide good value and technology, many still small but with promise. As always, the proof will be in how they grow and how well they treat their customers.

That’s it for a wrap on Year 2 of the Enterprise Search Practice at the Gilbane Group. Check out our search studies at http://gilbane.com/Research-Reports.html and PLEASE let me hear your thoughts on my thoughts or any other search-related topic via the contact information at http://gilbane.com/

Case Studies and Guidance for Search Implementations

We’ll be covering a chunk of the search landscape at the Gilbane Conference next week. While there are nominally over 100 search solutions that target some aspect of enterprise search, there will be plenty to learn from the dozen or so case studies and tool options described. Commentary and examples include: Attivio, Coveo, Exalead, Google Search Appliance (GSA), IntelliSearch, Lexalytics, Lucene, Oracle Secure Enterprise Search, Thunderstone and references to others. Our speakers will cue us in to the current state of search as it is being implemented. Several exhibitors will also be on site to demonstrate their capabilities, and they represent some of the best. Check out the program lineup below and try to make it to Boston to hear those with hands-on experience.

EST-1: Plug-and-Play: Enterprise Experiences with Search Appliances

  • So you want to implement an enterprise search solution? Speakers: Angela A. Foster, FedEx Services, FedEx.com Development, and Dennis Shirokov, Marketing Manager, FedEx Digital Access Marketing.
  • The Make or Buy Decision at the U.S. General Services Admin. Speaker: Thomas Schaefer, Systems Analyst and Consultant, U.S. General Services Administration.
  • Process and Architecture for Implementing GSA at MITRE. Speaker: Robert Joachim, Info Systems Engr, Lead, The MITRE Corporation.

EST-2: Search in the Enterprise When SharePoint is in the Mix

  • Enterprise Report Management: Bringing High Value Content into the Flow of Business Action. Speaker: Ajay Kapur, VP of Product Development, Apps Associates.
  • Content Supply? Meet Knowledge Demand: Coveo SharePoint Integration. Speaker: Marc Solomon, Knowledge Planner, PRTM.
  • In Search of the Perfect Search: Google Search on the Intranet. Speaker: June Nugent, Director of Corporate Knowledge Resources, NetScout Systems.

EST-3: Open Source Search Applied in the Enterprise

  • Context for Open Source Implementations. Speaker: Leslie Owens, Analyst, Forrester Research.
  • Intelligent Integration: Combining Search and BI Capabilities for Unified Information Access. Speaker: Sid Probstein, CTO, Attivio.

EST-4: Search Systems: Care and Feeding for Optimal Results

  • Getting Off to a Strong Start with Your Search Taxonomy. Speaker: Heather Hedden, Principal, Hedden Information Management.
  • Getting the Puzzle Pieces to Fit: Finding the Right Search Solution(s). Speaker: Patricia Eagan, Sr. Mgr, Web Communications, The Jackson Laboratory.
  • How Organizations Need to Think About Search. Speaker: Rob Wiesenberg, President & Founder, Contegra Systems.

EST-5: Text Analytics/Semantic Search: Parsing the Language

  • Overview and Differentiators: Text Analytics, Text Mining and Semantic Technologies. Speaker: Jeff Catlin, CEO, Lexalytics.
  • Reality and Hype in the Text Retrieval Market. Speaker: Curt Monash, President, Monash Research.
  • Two Linguistic Approaches to Search: Natural Language Processing and Concept Extraction. Speaker: Win Carus, President and Founder, Information Extraction Systems.


Enterprise Search is Everywhere

When you look for an e-mail you sent last week, a vendor account rep’s phone number, a PowerPoint presentation you received from a colleague in the Paris office, a URL to an article recommended for reading before the next Board meeting, or background on a company project you have been asked to manage, you are engaged in search in, about, or for your enterprise. Whether you are working inside applications that you have used for years, or simply perusing the links on a decades-old corporate intranet, trying to find something while you are in the enterprise doing its work, you are engaging with a search interface.
Dissatisfaction comes from the number of these interfaces and the lack of a cohesive roadmap to all there is to be found. You already know what you know and what you need to know. Sometimes you know how to find what you need to know, but more often you don’t, and you stumble through a variety of possibilities up to and including asking someone else how to find it. That missing roadmap is more than an annoyance; it is a major encumbrance to doing your job, and top management does not get it. They simply won’t accept that one or two content roadmap experts (overhead) could be saving many people-years of company time and lost productivity.
In most cases, the simple notion of creating clear guidelines and signposts to enterprise content is a funding showstopper. It takes human intelligence to design and build that roadmap and put the technology aids in place to reveal it. Management will fund technology but not the content architects, knowledge “mappers” and ongoing gatekeepers to stay on top of organizational change, expansions, contractions, mergers, rule changes and program activities that evolve and shift perpetually. They don’t want infrastructure overhead whose primary focus, day-in and day-out, will be observing, monitoring, communicating, and thinking about how to serve up the information that other workers need to do their jobs. These people need to be in place as the “black-boxes” that keep search tools in tip-top operating form.
Last week I commented on the products that will be featured in the Search Track at Gilbane Boston, Dec. 3rd and 4th. What you will learn about these tools will be couched in case studies that reveal the ways in which search technology is leveraged by people who think a lot about what needs to be found and how search needs to work in their enterprises. They will talk about what tools they use, why, and what they are doing to get search to do its job. I’ve asked the speakers to tell their stories and, based on my conversations with them in the past week, that is what we will hear: the reality!

In the Field: The Enterprise Search Market Offers CHOICES

Heading into the Gilbane Boston conference next month we have case studies that feature quite an array of enterprise search applications. So many of the search solutions now being deployed are implemented with a small or part-time staff that it is difficult to find the one or two people who can attend a conference to tell their stories. We have surveyed blogs, articles and case studies published elsewhere to identify organizations and people who have hands-on experience in the trenches deploying search engines in their enterprises. Our speakers are those who were pleased to be invited, and they will be sharing their experiences on December 3rd and 4th.

From the search appliances Thunderstone and Google Search Appliance, to platform search solutions based on Oracle Secure Enterprise Search, to the standalone search products Coveo, Exalead, and ISYS, we will hear from those who have been involved in selecting, implementing and deploying these solutions for enterprise use. From a Forrester industry analyst and an Attivio developer we’ll hear about open source options and how they are influencing enterprise search development. The search sessions will be rounded out as we explore the influences and mergers of text mining and text analytics (with Monash Research) and semantic technologies (Lexalytics and InfoExtract) as they relate to other enterprise search options. There will be something for everyone in the sessions and in the exhibit hall.

Personally, I am hoping to see many in the audience who also have search stories within their own enterprises. Those who know me will attest to my strong belief in communities of practice and sharing. It strengthens the marketplace when people from different types of organizations share their experiences trying to solve similar problems with different products. Revealing competitive differentiators among the numerous search products is something that pushes technology envelopes and makes for a more robust marketplace. Encouraging dialogue about products and in-the-field experiences is a priority for all sessions at the Gilbane Conference, and I’ll be there to prompt discussion in all five search sessions. I hope you’ll join me in Boston.

Apples and Orangutans: Enterprise Search and Knowledge Management

This title by Mike Altendorf in CIO Magazine, October 31, 2008, mystifies me: “Search Will Outshine KM.” I did a little poking around to discover who he is and found a similar statement by him back in September: “Search is being implemented in enterprises as the new knowledge management and what’s coming down the line is the ability to mine the huge amount of untapped structured and unstructured data in the organisation.”

Because I follow enterprise search for the Gilbane Group while maintaining a separate consulting practice in knowledge management, I am struggling with his conflation of the two terms or even the migration of one to the other. The search we talk about is a set of software technologies that retrieve content. I’m tired of the debate about the terminology “enterprise search” vs. “behind the firewall search.” I tell vendors and buyers that my focus is on software products supporting search executed within (or from outside looking in) the enterprise on content that originates from within the enterprise or that is collected by the enterprise. I don’t judge whether the product is for an exclusive domain, content type or audience, or whether it is deployed with the “intent” of finding and retrieving every last scrap of content lying around the enterprise. It never does nor will do the latter but if that is what an enterprise aspires to, theirs is a judgment call I might help them re-evaluate in consultation.

It is pretty clear that Mr. Altendorf is impressed with the potential for Fast and Microsoft, and he knows they are firmly entrenched in the software business. But knowledge management (KM) is not now, nor has it ever been, a software product or even a suite of products. I will acknowledge that KM is a messy thing to talk about, and the label means many things even to those of us who focus on it as a practice area. It clearly got derailed as a useful “discipline” of focus in the 90s when tool vendors decided to place their products into a new category called “knowledge management.”

It sounded so promising and useful, this idea of KM software that could just suck the brains out of experts and pull the business know-how of enterprises out of hidden and lurking content. We know better, we who try to refine the art of leveraging knowledge by assisting our clients with blending people and technology to establish workable business practices around knowledge assets. We bring together IT, business managers, librarians, content managers, taxonomists, archivists, and records managers to facilitate good communication among many types of stakeholders. We work to define how to apply behavioral business practices and tools to business problems. Understanding how a software product can help in a process, what its potential applications are, and how to encourage usability standards is part of the knowledge manager’s toolkit. It is quite an art, this KM process of bringing tools together with knowledge assets (people and content) into a productive balance.

Search is one of the tools that can facilitate leveraging knowledge assets and help us find the experts who might share some “how-to” knowledge, but it is not, nor will it ever be a substitute for KM. You can check out these links to see how others line up on the definitions of KM: CIO introduction to KM and Wikipedia. Let’s not have the “KM is dead” discussion again!

What Determines a Leader in the Enterprise Search Market?

Let’s agree that most if not all “enterprise search” is really about point solutions within large corporations. As I have written elsewhere, the “enterprise” is almost always a federation of constituencies, each with their own solutions for content applications and that includes search. If there is any place that we find truly enterprise-wide application of search, it is in small and medium organizations (SMBs). This would include professional service firms (consultancies and law firms), NGOs, many non-profits, and young R&D companies. There are plenty of niche solutions for SMBs and they are growing.

I bring this up because the latest Gartner “magic quadrant” lists Microsoft (MS) as the “leader” in enterprise search; this is the same place Gartner has positioned Fast Search & Transfer in the past. Whether this is because Fast’s assets are now owned by MS or because Gartner really believes that Microsoft is the leader, I still beg to strongly differ.

I have been perplexed by the Microsoft/Fast deal since it was announced earlier this year because, although Fast has always offered a lot of search technology, I never found it to be a compelling solution for any of my clients. Putting aside the huge upfront capital cost for licenses, the staggering amount of development work, and the time to deployment, there were other concerns. I sensed a questionable commitment to an on-going, sustainable, unified and consistent product vision with supporting services. I felt that any client of mine would need very deep pockets indeed to really make a solid value case for Fast. Most of my clients are already burned out on really big enterprise deployments of applications in the ERP and CRM space, and understand the wisdom of beginning with smaller value-achievable, short-term projects on which they can build.

Products that impress me as having much more “out-of-the-box” at a more reasonable cost are clearly leaders in their unique domains. They have important clients achieving a good deal of benefit at a reasonable cost, in a short period of time. They have products that can be installed, implemented and maintained internally without a large staff of administrators, and they have good reputations among their clients for responsiveness and a cohesive series of roll-outs. Several have as many or more clients than Fast ever had (if we ever knew the real number). Coveo, Exalead, ISYS, Recommind, Vivisimo, and X1 are a few of a select group that are making a mark in their respective niches, as products ready for action with a short implementation cycle (weeks or months, not years).

Autonomy and Endeca continue to bring value to very large projects in large companies but are not plug-and-play solutions, by any means. Oracle, IBM, and Microsoft offer search solutions of a very different type with a heavy vendor or third-party service requirement. Google Search Appliance has a much larger installed base than any of these but needs serious tuning and customization to make it suitable to enterprise needs. Take the “leadership” designation with a big grain of salt because what leads on the charts may be exactly what bogs you down. There are no generic, one-size-fits-all enterprise search solutions, including those in the “leaders” quadrant.

Dewey Decimal Classification, Categorization, and NLP

I am surprised how often various content organizing mechanisms on the Web are compared to the Dewey Decimal System. As a former librarian, I am disheartened to be reminded how often students were lectured on the Dewey Decimal System, apparently to the exclusion of learning about subject categorization schemes. The two complemented each other, but that seems to be a secret to all but librarians.

I’ll try to share a clearer view of the model and explain why new systems of organizing content in enterprise search are quite different from the decimal model.

Classification is a good generic term for defining physical organizing systems. Unique animals and plants are distinguished by a single classification in the biological naming system. So too are books in a library. There are two principal classification systems for arranging books on the shelf in Western libraries: Dewey Decimal and Library of Congress (LC). Each uses coding (numeric for Dewey Decimal and alpha-numeric for Library of Congress) to establish where a book belongs logically on a shelf, relative to other books in the collection, according to the book’s most prominent content topic. A book on nutrition for better health might be given a classification number for some aspect of nutrition or one for a health topic, but a human being has to make a judgment about which topic the book is most “about,” because the book can only live in one section of the collection. It is probably worth mentioning that the Dewey and LC systems are both hierarchical but with different priorities (e.g. Dewey puts broad topics like Religion, Philosophy, and Psychology at the top levels, while LC groups those topics together and includes more scientific and technical topics, like Agriculture and Military Science, at the top of its list).

So why classify books to reside in topic order, when it takes so much labor to move collections around to make space for new books? It is for the benefit of the users, to enable “browsing” through the collection, although it may be hard to accept that the term browsing was a staple of library science decades before the internet. Library leaders established eons ago the need for a system of physical organization to help readers peruse the book collection by topic, leading from the general to the specific.

You might ask what kind of help that was for finding the book on nutrition that was classified under “health science.” This is where another system, largely hidden from the public or often made annoyingly inaccessible, comes in. It is a system of categorization in which any content, book or otherwise, can be assigned an unlimited number of categories. Wandering through the stacks, you would never suspect this secret way of finding a nugget about your favorite hobby in a book classified to live elsewhere. The standard lists of terms for further describing books under multiple headings are called “subject headings,” and you had to use a library catalog to find them. Unfortunately, they contain mysterious conventions called “sub-divisions,” designed to pre-coordinate any topic with other generic topics (e.g. Handbooks, etc. and United States). Today we would call these generic subdivision terms facets: one reflects the kind of book and the other the geographical scope it covers.

With the marvel of the Web page, hyperlinking, and “clicking through” hierarchical lists of topics, we can narrow a search for handbooks on nutrition in the United States for better health beginning at any facet or topic, and still come up with the book that meets all four criteria. We no longer have to be constrained by the Dewey model of browsing the physical location of our favorite topics, probably missing a lot of good stuff. But then we never did. The subject card catalog gave us a tool for finding more than we would by classification code alone. But even that was a lot more tedious than navigating easily through a hierarchy of subject headings, narrowing the results by facets on a browser tab, and further narrowing the results by yet another topical term until we find just the right piece of content.
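
To make that narrowing mechanics concrete, here is a minimal sketch of faceted filtering in Python. The records, subject terms, and facet names are invented for illustration; real catalogs and search engines model this in many different ways.

```python
# Minimal sketch of faceted narrowing; records, subjects, and facet
# names are invented for illustration.
records = [
    {"title": "Eating Well: A Handbook of Nutrition for Better Health",
     "subjects": {"nutrition", "health"},
     "facets": {"form": "handbooks", "place": "United States"}},
    {"title": "A History of French Cuisine",
     "subjects": {"cooking"},
     "facets": {"form": "monographs", "place": "France"}},
]

def narrow(items, subject=None, **facets):
    """Keep records matching the subject and every requested facet."""
    for r in items:
        if subject and subject not in r["subjects"]:
            continue
        if all(r["facets"].get(k) == v for k, v in facets.items()):
            yield r["title"]

# Narrow step by step: subject, then form, then place -- any order works.
print(list(narrow(records, subject="nutrition",
                  form="handbooks", place="United States")))
```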

Taking the next leap we have natural language processing (NLP) that will answer the question, “Where do I find handbooks on nutrition in the United States for better health?” And that is the Holy Grail for search technology – and a long way from Mr. Dewey’s idea for browsing the collection.

Taxonomy, Yes, but for What?

The term taxonomy crept into the search lexicon by stealth and is now firmly entrenched. The very early search engines, circa 1972-73, presented searchers with the retrieval option of selecting content using controlled vocabularies from a standardized thesaurus of terminology in a particular discipline. With no neat graphical navigation tools, searches were crafted on a typewriter-like device, painfully typed in an arcane syntax. A stray hyphen, period or space would render the query un-computable, so after deciphering the error message, the searcher would try again. Each minute and each result cost money, so errors were a real expense.

We entered the Web search era bundling content into a directory structure, like the “Yellow Pages,” or organizing query results into “folders” labeled with broad topics. The controlled vocabulary that represented directory topics or folder labels became known as a taxonomic structure, with the early ones at Northern Light and Yahoo crafted by experts with knowledge of the rules of controlled vocabulary, thesaurus development and maintenance. Google derailed that search model with its simple “search box” requiring only a word or phrase to grab heaps of results. Today we are in a new era. Some people like searching by typing keywords in a box, while others prefer the suggestions of a directory or tree structure. Building taxonomic structures for more than e-commerce sites is now serious business for searches within enterprises where many employees prefer to navigate through the terminology to browse and discover the full scope of what is there.

Navigation is but one purpose a taxonomy can serve in search. Depending on the application domain, the richness of the subject matter, and the scope and depth of topics, these lists can become quite large and complex. The more cross-references (e.g. cell phones USE wireless phones) are embedded in the list, the more likely the searcher’s own term will be present. There is a diminishing return, however: if the user has to navigate to the system’s preferred term too often, the entire process of searching becomes unwieldy and is abandoned. On the other hand, if the system automates the smooth transition from one term to another, the richness and complexity of a taxonomy can be an asset.
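
As a concrete illustration of that automated transition, here is a minimal sketch of query-time USE-reference resolution. The terms and mappings are invented; production taxonomy tools handle this with far more nuance.

```python
# Sketch of query-time USE-reference resolution; terms are invented.
use_refs = {                      # entry term -> preferred term
    "cell phones": "wireless phones",
    "mobile phones": "wireless phones",
}

def preferred(term: str) -> str:
    """Map a searcher's term to the taxonomy's preferred term, if any."""
    return use_refs.get(term.lower(), term)

print(preferred("Cell Phones"))   # -> wireless phones
print(preferred("routers"))       # -> routers (no cross-reference)
```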

In more sophisticated applications of taxonomies, the thesaurus model of relationships becomes a necessity. When a search engine has embedded algorithms that can interpret explicit term relationships, it indexes content according to a taxonomy and all its cross-references. Taxonomy here informs the index engine. This requires substantially more granular maintenance and governance than a navigation taxonomy does. To work well, a large corpus of terminology needs to be built to assure that what the content says and means matches what the searcher expects to see in results. If a search gives back unsatisfactory results due to a poor taxonomy, trust in the search system fails rapidly and the benefits of whatever effort was put into building the taxonomy are lost.
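
A hedged sketch of what “taxonomy informs the index engine” can mean in practice: at indexing time, each raw term is normalized to its preferred form and expanded with broader terms from the thesaurus, so a document becomes findable under related vocabulary. The thesaurus here is a toy; real products implement this in proprietary ways.

```python
# Sketch of a thesaurus informing indexing; terms and relationships
# are invented for illustration.
use_refs = {"cell phones": "wireless phones"}
broader = {"wireless phones": ["telecommunications equipment"]}

def index_terms(raw_terms):
    """Return the full set of terms under which a document is indexed."""
    expanded = set()
    for t in raw_terms:
        t = use_refs.get(t, t)               # normalize entry terms
        expanded.add(t)
        expanded.update(broader.get(t, []))  # add broader terms
    return expanded

print(index_terms(["cell phones"]))
# -> {'wireless phones', 'telecommunications equipment'}
```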

I bring this up because establishing the intent of a taxonomy is the first step in deciding whether to start building one. Either model is an on-going commitment, but the latter is a much larger investment in sophisticated human resources. The conditions that must be met for any taxonomy to succeed must be articulated when selling the project and its value proposition.

Controlling Your Enterprise Search Application

When interviewing search administrators who had also been part of product selection earlier this year, I asked about surprises they had encountered. Some involved the selection process but most related to on-going maintenance and support. None commented on actual failures to retrieve content appropriately. That is a good thing, whether it was because they had already tested for that in a proof of concept during due diligence, or because they were lucky.
Thinking about how product selections are made prompts me to comment on two major search product attributes that control the success or failure of search for an enterprise. One is the actual algorithms that control content indexing: what is indexed and how it is retrieved from the index (or indices). The second is the interfaces: interfaces for the population of searchers to formulate and execute queries, and interfaces for results presentation. On each aspect, buyers need to know what they can control and how best to exercise that control for success.
Indexing and retrieval technology is embedded in search products; the administrative options for altering search scalability, indexing, and content selection during retrieval range from few to none. The “secret sauce” of each product is largely hidden, although patented aspects may be available for research. Until an administrator gets deeply into tuning and experimenting with significant corpora of content, it is difficult to assess the net effect of the delivered tuning options. The time to make an informed evaluation of how well a given product will retrieve your content, when searched by your particular audience, is before a purchase is made. You can’t control the underlying technology, but you can perform a proof of concept (PoC). This requires:

  • human resources and a commitment of computing resources
  • a well-defined amount, type, and nature of content (metadata plus full-text, or unstructured full-text only) to give a testable sample
  • testers who are representative of all potential searchers
  • a comparison of the results from three to four systems to reveal how well each retrieves the intended content targets (a minimal scoring sketch follows this list)
  • knowledge of the content by testers and similarity of searches to what will be routinely sought by enterprise employees or customers
  • search logs of previously deployed search systems, if they exist. Searches that routinely failed in the past should be used to test newer systems
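
To ground that comparison step, here is a minimal sketch of how PoC results might be scored. The queries, document IDs, and stub engine are all invented; a real PoC would wrap each candidate product’s actual query interface.

```python
# Minimal PoC scoring sketch: for each test query we know which documents
# should come back (e.g. drawn from old search logs), and we measure what
# fraction of them an engine returns in its top-k results.

test_set = {
    "travel expense policy": {"doc17", "doc42"},
    "2007 sales kickoff deck": {"doc88"},
}

def recall_at_k(engine, k=10):
    """Fraction of expected documents found in the engine's top-k results."""
    hits = total = 0
    for query, expected in test_set.items():
        results = set(engine(query)[:k])
        hits += len(results & expected)
        total += len(expected)
    return hits / total

def dummy_engine(query):
    """Stand-in for a real product's query API; returns ranked doc IDs."""
    return ["doc42", "doc99", "doc17"] if "policy" in query else ["doc11"]

print(f"recall@10: {recall_at_k(dummy_engine):.2f}")  # finds 2 of 3 targets
```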

Interface technology
Unlike the embedded search technology, buyers can exercise design control, or hire a third party, to produce search interfaces that vary enormously. What searchers experience when they first encounter a search engine, whether a simple search box on a portal or a novel combination of search box, navigation options, and special search forms, is within the control of the enterprise. Taking that control may be required if what comes “out-of-the-box” as the default is not satisfactory. You may find, at a reasonable price, a terrific search engine that scales well, indexes metadata and full text competently, and retrieves what the audience expects, but requires a different look-and-feel for your users. Through an API (application programming interface), SDK (software development kit), or application connectors (e.g. Documentum, SharePoint), numerous customization options are delivered with enterprise search packages or are available as add-ons; a minimal sketch of such a presentation layer appears below.
In either case, human resource costs must be added to the bottom line. A large number of mature software companies and start-ups are innovating with both their indexing techniques and interface design technologies. They are benefiting from several decades of search evolution for search experts, and now a decade of search experiences in the general population. Search product evolution is accelerating as knowledge of searcher experiences is leveraged by developers. You may not be able to control emerging and potentially disruptive technologies, but you can still exercise beneficial controls when selecting and implementing most any search system.
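
As a closing illustration, here is a minimal sketch of the kind of thin presentation layer such customization produces. The search_api() function is a hypothetical stand-in; every vendor’s real API, SDK, or connector differs.

```python
# Sketch of a thin presentation layer over a search product's query API.
# search_api() is a hypothetical stand-in, not any vendor's actual call.

def search_api(query: str):
    """Hypothetical engine call returning (title, url, snippet) tuples."""
    return [("Travel policy", "http://intranet/doc17", "Expense rules ...")]

def render_results(query: str) -> str:
    """Wrap raw engine results in the enterprise's own look-and-feel."""
    rows = "".join(
        f'<li><a href="{url}">{title}</a><p>{snippet}</p></li>'
        for title, url, snippet in search_api(query)
    )
    return f'<ul class="acme-results">{rows}</ul>'  # house style, not default

print(render_results("travel policy"))
```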
