Archive for Search Technologies and Products

Search Engines: They’ve Been Around Longer Than You Think

It dates me, as well as search technology, to acknowledge that an article in Information Week by Ken North with both Medlars and Twitter in the title would be meaningful to me. Discussing search requires context, especially when trying to convince IT folks that special expertise is required to do search really well in the enterprise, and that it is not something acquired in computer science courses.

The evolution of search systems from the print indexes of the early 1900s, such as Index Medicus (the National Library of Medicine’s index to medical literature) and Chemical Abstracts, to the advent of the online Medical Literature Analysis and Retrieval System (Medlars) in the 1960s was slow. However, the phases of search technology evolution since the launch of Medlars have hardly been warp speed either. The article is highly recommended because it gives historical context to automated search while tracing application and technology changes over the past 50 years. The comparison between Medlars and Twitter as search platforms is fascinating, something it would never have occurred to me to explore.

A key point of the article is the difference between a system of search designed for archival content with deeply hierarchical categorization for a specialized corpus and a system for highly transient, terse and topically generalized content. Last month I commented on the need to have search present in your normal work applications, and this article underscores the enormous range of purposes for search. Information of a short temporal nature and scholarly research each have a place in the enterprise, but it would be a stretch to think of searching for both types via a single search interface. Wanting to know what a colleague is observing or learning at a conference is very different from researching the effects of uranium exposure on human anatomy.

What has not changed much in the world of applied search technology is why we need to find information and how it becomes accessible. The type of search done in Twitter or on LinkedIn today is for information that we used to pick up from a colleague (in person or on the phone) or in daily or weekly industry news publications. That’s how we found the name of an expert, learned the latest technologies being rolled out at a conference or got breaking news on a new space material being tested. What has changed is the method of retrieval, though not by a lot, and the relative gain in efficiency may not be that great. Today, we depend on a lot of pre-processing of information by our friends and professional colleagues to park information where we can pick it up on the spur of the moment – easy for us, but someone still spends the time to put it out there where we can grab it.

At the other end of the spectrum is rich research content that still needs to be codified and revealed to search engines with appropriate terminology so that we can pursue in-depth searching and get precisely relevant and comprehensive results. Technology tools are much better at assisting us with content enhancement to deliver the right and complete results, but humans still write the rules of indexing and curate the vocabularies needed for classification.

Fifty years is a long time, and we are still trying to improve enterprise search. All it takes to make it work better is more human work.

Why Isn’t Enterprise Search “Mission Critical”?

Why isn’t “search” the logical end-point of any content and information management activity? If we don’t care about being able to find valued and valuable information, why bother with any of the myriad technologies employed to capture, organize, categorize, store, and analyze content? What on earth is the point of having our knowledge workers document the results of their business, science, engineering and marketing endeavors if we never aspire to having that work retrieved, leveraged or re-purposed by others?

However, an article in the September 5, 2011 issue of Information Week entitled “HP Transformation: Autonomy is a Modest Start” gave me a jolt with this comment: “Autonomy has very sophisticated search capabilities including federation–the ability to search across many repositories and sources–and video and image search. But with all that said, enterprise search isn’t a hot, mission-critical business priority.” [NOTE: in the print version the “call-out” box had slightly different phrasing, but it jumped off the page anyway.] This is pretty provocative and disappointing to read in the pages of this particular publication.

Over the past few months, I have been engrossed in several client projects related to taxonomy development, vocabulary management and integration with content and search systems. There is no doubt that every one of these institutions is focused with laser intensity on getting the search interface to deliver the highest value for the effort and dollars expended. In each case, the project involved a content management component for capturing metadata with solid uniformity, strong vocabulary control, and rich synonym tables for ensuring findability when a search query uses different language than the content or metadata. Every step in each of these projects has come back to the acid test: “will the searcher be able to find what s/he is looking for?”
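
To make the role of synonym tables concrete, here is a minimal sketch (not drawn from any of the client projects above) of query-time synonym expansion against a controlled vocabulary; the vocabulary entries, terms, and function names are illustrative assumptions.

```python
# Minimal sketch of query-time synonym expansion against a controlled vocabulary.
# The vocabulary below is a made-up example, not from any client project.

# Each preferred term maps to the variants, acronyms, and jargon that
# searchers may type instead of the term used in the content or metadata.
SYNONYMS = {
    "myocardial infarction": ["heart attack", "mi", "cardiac infarction"],
    "magnetic resonance imaging": ["mri", "mr imaging"],
}

def expand_query(query):
    """Return the query plus every controlled-vocabulary term it maps to."""
    q = query.lower()
    terms = {q}
    for preferred, variants in SYNONYMS.items():
        if q == preferred or q in variants:
            terms.add(preferred)
            terms.update(variants)
    return terms

# A searcher types "heart attack", but the metadata uses the preferred term
# "myocardial infarction"; expanding the query lets both match.
print(expand_query("heart attack"))
```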

In past posts I have commented on the strength of enterprise search technologies and the breadth of offerings that cover a wide array of content findability needs and markets. From embedded search (within content management systems, archive and records management systems, museum systems, etc.) to standalone search engines designed to work well in discrete vertical markets or functional areas of enterprises (e.g., engineering, marketing, healthcare, energy exploration), buyers have a wealth of options from which to choose. Companies that formerly focused on web site management, business intelligence, data mining, and numerous other content-related tools are redefining themselves with additional terminology like e-discovery, 360-degree views (of information), content accessibility, and unified information.

Without the search component, all of the other technologies, which have been so hot in the past, are worthless. The article goes on to say that the hottest areas (of software growth) are business analytics and big-data analysis. Neither of these contributes business value without search underpinnings.

So, let’s get off this kick of under-rating and marginalizing search as “not mission critical” and think very seriously about the consequences of trying to run any enterprise without being able to find the products of our intellectual work output.

Collaboration, Convergence and Adoption

Here we are, halfway through 2011, and on track for a banner year in the adoption of enterprise search, text mining/text analytics, and their integration with collaborative content platforms. You might ask for evidence; what I can offer are anecdotal observations. Others track industry growth in terms of dollars spent, but that makes me leery when, over the past half dozen years, so much disappointment has been expressed with the failures of legacy software applications to deliver satisfactory results. My antenna tells me we are on the cusp of expectations beginning to match reality, as enterprises find better ways to select, procure, implement, and deploy applications that meet business needs.

What follows are my happy observations after attending the 2011 Enterprise Search Summit in New York and the 2011 Text Analytics Summit in Boston. Other inputs for me continue to be a varied reading list of information industry publications, business news, vendor press releases and web presentations, and blogs, plus conversations with clients and software vendors. While this blog normally focuses on enterprise search, following content management technologies and system integration tools contributes valuable insight into all the applications that shape search successes and frustrations.

Collaboration tools and platforms gained early traction in the 1990s as technology offerings to the knowledge management crowd. The idea was that teams and workgroups needed ways to share knowledge through the contribution of work products (documents) to “places” for all to view. Document management systems inserted themselves into the landscape for managing the development of work products (creating, editing, collaborative editing, etc.). However, collaboration spaces and document editing and version control activities remained applications that operated more apart than in sync.

The collaboration space has been redefined largely because SharePoint now dominates current discussions about collaboration platforms and activities. While early collaboration platforms were carefully structured to provide a thoughtfully bounded environment for sharing content, their lack of provision for idiosyncratic and often necessary workflows probably limited market dominance.

SharePoint changed the conversation to one of build-it-to-do-anything-you-want-the-way-you-want (BITDAYWTWYW). What IT clearly wants is a single-vendor architecture that delivers content creation, management, collaboration, and search. What end-users want is workflow efficiency and reliable search results. This introduces another level of collaborative imperative, since the BITDAYWTWYW model requires expertise that few enterprise IT support people carry and that fewer end-users would entrust to their IT departments. So, third-party developers or software offerings become the collaborative option. SharePoint is not the only collaboration software but, because of its dominance, a large second tier of partner vendors is turning SharePoint adopters on to its potential. Collaboration of this type is ramping up wildly in the marketplace.

Convergence of technologies and companies is on the rise as well. The non-Microsoft platform companies (OpenText, Oracle, and IBM) are basing their strategies on tightly integrating their solid cache of acquired, mature products. These acquisitions have plugged gaps in text mining, analytics, and vocabulary management. Google and Autonomy are also entering this territory, although they are still short on the maturity model. The convergence of document management, electronic content management, text and data mining, analytics, e-discovery, a variety of semantic tools, and search technologies is shoring up the “big-platform” vendors to deal with “big data.”

Sitting on the periphery is the open source movement. It is finding ways to variously collaborate with the dominant commercial players, disrupt select application niches (e.g., WCM), and contribute solutions where neither the SharePoint model nor the big-platform, tightly integrated models can win easy adoption. Lucene/Solr is finding acceptance in the government and non-profit sectors and also appeals to SMBs.
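
As a rough illustration of why the barrier to experimenting with open source search is so low, here is a minimal sketch of indexing and querying a Solr core over its HTTP API. The local host, the core name (“docs”), and the fields are my own assumptions for a recent Solr release, not details from the original post.

```python
# Minimal sketch of adding a document to, and querying, a local Solr core
# over HTTP. Assumes a recent Solr release running at localhost:8983 with a
# core named "docs" whose schema accepts "id", "title", and "body" fields.
import json
import urllib.parse
import urllib.request

SOLR = "http://localhost:8983/solr/docs"

def add_document(doc_id, title, body):
    """Post one document as JSON and commit it so it is searchable at once."""
    payload = json.dumps([{"id": doc_id, "title": title, "body": body}]).encode("utf-8")
    req = urllib.request.Request(
        SOLR + "/update?commit=true",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def search(query, rows=10):
    """Run a simple query and return the matching documents."""
    params = urllib.parse.urlencode({"q": query, "rows": rows, "wt": "json"})
    with urllib.request.urlopen(SOLR + "/select?" + params) as resp:
        return json.load(resp)["response"]["docs"]

if __name__ == "__main__":
    add_document("1", "Open source search", "Lucene and Solr in the enterprise")
    print(search("body:lucene"))
```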

All of these factors were actively on display at the two meetings but the most encouraging outcomes that I observed were:

  • Rise in attendance at both meetings
  • More knowledgeable and experienced attendees
  • Significant increase in end-user presentations

The latter brings me back to the adoption issue. Enterprises that previously sent people to earlier meetings to learn about technologies and products are now in the implementation and deployment stages. Thus, they are now able to contribute presentations with real experience and commentary about products. Presenters are commenting on adoption issues, usability, governance, successful practices, and pitfalls or unresolved issues.

Adoption is what will drive product improvements in the marketplace, because experienced adopters are speaking out about their activities. Public presentations of user experiences can and should establish expectations for better tools, better vendor relationship experiences, more collaboration among products and, ultimately, reduced complexity in the implementation and deployment of products.

Lucene Open Source Community Commits to a Future in Search

It has been nearly two years since I commented on an article in Information Week, “Open Source, Its Time Has Come” (Nov. 2008). My main point was the need for deep expertise to execute enterprise search really well. I predicted the growth of service companies with that expertise, particularly for open source search. Not long after that, Lucid Imagination was launched, with a focus on building and supporting solutions based on Lucene and its more turnkey version, Solr.

It has not taken long for Lucid Imagination (LI) to take charge of the Lucene/Solr community of practice (CoP), and to launch its own platform built on Solr, Lucidworks Enterprise. Open source depends on deep and sustained collaboration; LI stepped into the breach to ensure that the hundreds of contributors, users and committers have a forum. I am pretty committed to CoPs myself and know that nurturing a community for the long haul takes dedicated leadership. In this case it is undoubtedly enlightened self-interest that is driving LI. They are poised to become the strongest presence for driving continuous improvements to open source search, with Apache Lucene as the foundation.

Two weeks ago LI hosted Lucene Revolution, the first such conference in the US. It was attended by over 300 people in Boston, October 7-8, and I can report that this CoP is vibrant and enthusiastic. Moderated by Steve Arnold, the program ran smoothly and featured excellent sessions. Those I attended reflected a respectful exchange of opinions and ideas about tools, methods, practices and priorities. While there were allusions to vigorous debate among committers about priorities for code changes and upgrades, the mood was collaborative in spirit and tinged with humor, always a good way to operate when emotions and convictions are on stage.

From my 12 pages of notes come observations about the three principal categories of sessions:

  1. Discussions, debates and show-cases for significant changes or calls for changes to the code
  2. Case studies based on enterprise search applications and experiences
  3. Case studies based on the use of Lucene and Solr embedded in commercial applications

Since the first category was more technical in nature, I leave the reader with my simplistic conclusions: core Apache Lucene and Solr will continue to evolve in a robust and aggressive progression. There are sufficient committers to make a serious contribution. Many who have decades of search experience are driving the charge and they have cut their teeth on the more difficult problems of implementing enterprise solutions. In announcing Lucidworks Enterprise, LI is clearly bidding to become a new force in the enterprise search market.

New and sustained build-outs of Lucene/Solr will be challenged by developers with ideas for diverging architectures, or “forking” the code, on which Eric Gries, LI CEO, commented in the final panel. He predicted that forking will probably be driven by the need to solve specific search problems that the current code does not accommodate. This will probably be more of a challenge for the spinoffs than for the core Lucene developers, and the difficulty of sustaining separate versions means that most forks will ultimately fail.

The enterprise search cases reflected organizations for which commercial turnkey applications will not or cannot easily be selected; for them, open source makes sense. From LI’s counterpart in the Linux world, Red Hat, come these earlier observations about why enterprises should embrace open source solutions: in short, the sorry state of quality assurance and code control in commercial products. Add to that the cost of services to install, implement and customize commercial search products. For many institutions, the argument is to go with open source when there is an imperative or call for major customization.

This appears to be the case for two types of enterprises that were featured on the program: educational institutions and government agencies. Both have procurement issues when it comes to making large capital expenditures. For them it is easier to begin with something free, like open source software, and then make incremental improvements and customize over time. Labor and services are cost variables that can be distributed more creatively using multiple funding options. Featured on the program were the Smithsonian, Adhere Solutions (doing systems integration work for a number of government agencies), MITRE (a federally funded research laboratory), the University of Michigan, and Yale. Cisco, a noteworthy commercial enterprise putting Lucene/Solr to work, also presented.

The third category of presenters was, by far, the largest contingent of open source search adopters: producers of applications that build Lucene and Solr (and other open source software) into their offerings. They are solidly entrenched because they are diligent committers and share in this community of like-minded practitioners, who serve as an extended enterprise of technical resources that keeps their overhead low. I can imagine the attractiveness of a lean business that runs on an open source foundation and operates in a highly agile mode. This must be enticing and exciting for developers who wilt at the idea of working in a constrained environment with layers of management and political maneuvering.

Among the companies building applications on Lucene that presented were Access Innovations, Twitter, LinkedIn, Acquia, RivetLogic and Salesforce.com. These stand out as relatively mature adopters with traction in the marketplace. Also present were companies that contribute their value through Lucene/Solr partnerships in which their products or tools are complementary, including Basis Technology, Documill, and Loggly.

Links to presentations by the organizations mentioned above will take you to conference highlights. Some will appeal to the technical reader, for there was a lot of code sharing and many technical tips in the slides. The diversity and scale of applications being supported by Lucene and Solr were impressive. Lucid Imagination and the speakers did a great job of illustrating why and how open source has a serious future in enterprise search. This was a confidence-building exercise for the community.

Two sentiments at the end summed it up for me. On the technical front, Eric Gries observed that it is usually clear what needs to be core (to the code) and what does not belong; then there is a lot of gray area, and that will contribute to constant debate in the community. For the user community, Charlie Hull of Flax opined that customers don’t care whether (the code) is in the open source core or in the special “secret sauce” application, as long as the product does what they want.

Leveraging Two Decades of Computational Linguistics for Semantic Search

Over the past three months I have had the pleasure of speaking with Kathleen Dahlgren, founder of Cognition, several times. I first learned about Cognition at the Boston Infonortics Search Engines meeting in 2009. That introduction led me to a closer look several months later when researching auto-categorization software. I was impressed with the comprehensive English language semantic net they had doggedly built over a 20+ year period.
A semantic net is a map of language that explicitly defines the many relationships among words and phrases. It might be very simple, illustrating something as fundamental as a small geographical locale and all the named entities within it, or as complex as the entire base language of English, with every concept mapped to illustrate all the ways that any one term is related to other terms, as illustrated in this tiny subset. Dr. Dahlgren and her team are among the few to have created a comprehensive semantic net for English.
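
As a toy illustration only (the terms and relation types here are my own invented examples, not part of Cognition’s actual semantic net), a fragment of a semantic net can be represented as a graph of typed relationships that a search application traverses to connect a query term to related concepts:

```python
# Toy fragment of a semantic net: terms linked by typed relationships.
# The terms and relations are invented for illustration.
SEMANTIC_NET = {
    "aspirin":   [("is_a", "analgesic"), ("treats", "headache")],
    "ibuprofen": [("is_a", "analgesic"), ("treats", "inflammation")],
    "analgesic": [("is_a", "drug")],
}

def related_terms(term, max_hops=2):
    """Collect every term reachable from `term` within max_hops relations."""
    seen = {term}
    frontier = [term]
    for _ in range(max_hops):
        next_frontier = []
        for t in frontier:
            for _relation, target in SEMANTIC_NET.get(t, []):
                if target not in seen:
                    seen.add(target)
                    next_frontier.append(target)
        frontier = next_frontier
    return seen - {term}

# A query on "aspirin" can be broadened to the related concepts
# "analgesic", "drug", and "headache".
print(related_terms("aspirin"))
```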

In 2003, Dr. Dahlgren established Cognition as a software company to commercialize the semantic net, designing software to apply it to semantic search applications. As the Gilbane Group launched its new research on Semantic Software Technologies, Cognition signed on as a study co-sponsor, and we engaged in several discussions with them that rounded out their history in this new marketplace. It was illustrative of pioneering in any new software domain.

Early adopters are key contributors to any software development. It is notable that Cognition has attracted experts in fields as diverse as medical research, legal e-discovery and Web semantic search. This gives the company valuable feedback for their commercial development. In any highly technical discipline, it is challenging and exciting to find subject experts knowledgeable enough to contribute to product evolution, and Cognition is learning from client experts where the best opportunities for growth lie.

Recent interviews with Cognition executives, and those of other sponsors, gave me the opportunity to get their reactions to my conclusions about this industry. These were the more interesting thoughts that came from Cognition after they had reviewed the Gilbane report:

  • Feedback from current clients and attendees at 2010 conferences, where Dr. Dahlgren was a featured speaker, confirms escalating awareness of the field; she feels that “This is the year of Semantics.” It is catching the imagination of IT folks who understand the diverse and important business problems to which semantic technology can be applied.
  • In addition to a significant upswing in semantics applied in life sciences, publishing, law and energy, Cognition sees specific opportunities for growth in risk assessment and risk management. Using semantics to detect signals, content salience, and measures of relevance is critical where the quantity of data and textual content is too voluminous for human filtering. There is not much evidence that financial services, banking and insurance are embracing semantic technologies yet, but semantics could dramatically improve their business intelligence, and Cognition is well positioned to support them with its already tested tools.
  • Enterprise semantic search will begin to overcome the poor reputation that traditional “string search” has suffered. There is growing recognition among IT professionals that in the enterprise 80% of the queries are unique; these cannot be interpreted based on popularity or social commentary. Determining relevance or accuracy of retrieved results depends on the types of software algorithms that apply computational linguistics, not pattern matching or statistical models.

In Dr. Dahlgren’s view, there is no question that a team approach to deploying semantic enterprise search is required. This means that IT professionals will work side-by-side with subject matter experts, search experts and vocabulary specialists to gain the best advantage from semantic search engines.

The unique language aspects of an enterprise content domain are as important as the software a company employs. The Cognition baseline semantic net, out-of-the-box, will always give reliable and better results than traditional string search engines. However, it gives top performance when enhanced with enterprise language, embedding all the ways that subject experts talk about their topical domain, jargon, acronyms, code phrases, etc.

With elements of its software already embedded in some notable commercial applications like Bing, Cognition is positioned for delivering excellent semantic search for an enterprise. They are taking on opportunities in areas like risk management that have been slow to adopt semantic tools. They will deliver software to these customers together with services and expertise to coach their clients through the implementation, deployment and maintenance essential to successful use. The enthusiasm expressed to me by Kathleen Dahlgren about semantics confirms what I also heard from Cognition clients. They are confident that the technology coupled with thoughtful guidance from their support services will be the true value-added for any enterprise semantic search application using Cognition.

The free download of the Gilbane study and deep-dive on Cognition was announced on their Web site at this page.

Semantically Focused and Building on a Successful Customer Base

Dr. Phil Hastings and Dr. David Milward spoke with me in June, 2010, as I was completing the Gilbane report, Semantic Software Technologies: A Landscape of High Value Applications for the Enterprise. My interest in a conversation was stimulated by several months of discussions with customers of numerous semantic software companies. Having heard perspectives from early adopters of Linguamatics’ I2E and other semantic software applications, I wanted to get some comments from two key officers of Linguamatics about what I heard from the field. Dr. Milward is a founder and CTO, and Dr. Hastings is the Director of Business Development.
A company with sustained profitability for nearly ten years in the enterprise semantic market space has credibility. Reactions from a maturing company to what users have to say are interesting and carry weight in any industry. My lines of inquiry and the commentary from the Linguamatics officers centered around their own view of the market and adoption experiences.
When asked about growth potential for the company outside of pharmaceuticals where Linguamatics already has high adoption and very enthusiastic users, Drs. Milward and Hastings asserted their ongoing principal focus in life sciences. They see a lot more potential in this market space, largely because of the vast amounts of unstructured content being generated, coupled with the very high-value problems that can be solved by text mining and semantically analyzing the data from those documents. Expanding their business further in the life sciences means that they will continue engaging in research projects with the academic community. It also means that Linguamatics semantic technology will be helping organizations solve problems related to healthcare and homeland security.
The wisdom of a measured and consistent approach comes through strongly when speaking with Linguamatics executives. They are highly focused and cite the pitfalls of trying to “do everything at once,” which would be the case if they were to pursue all markets overburdened with tons of unstructured content. While pharmaceutical terminology, a critical component of I2E, is complex and extensive, there are many aids to support it. The language of life sciences is in a constant state of being enriched through refinements to published thesauri and ontologies. However, in other industries with less technical language, Linguamatics can still provide important support to analyze content in the detection of signals and patterns of importance to intelligence and planning.
Much of the remainder of the interview centered on what I refer to as the “team competencies” of individuals who identify the need for any semantic software application; those are the people who select, implement and maintain it. When asked if this presents a challenge for Linguamatics or the market in general, Milward and Hastings acknowledged a learning curve and the need for a larger pool of experts for adoption. This is a professional growth opportunity for informatics and library science people. These professionals are often the first group to identify Linguamatics as a potential solutions provider for semantically challenging problems, leading business stakeholders to the company. They are also good advocates for selling the concept to management and explaining the strong benefits of semantic technology when it is applied to elicit value from otherwise under-leveraged content.
One Linguamatics core operating principle came through clearly when talking about the personnel issues of using I2E: the necessity of working closely with their customers. This means making sure that expectations about system requirements are correct, that examples of deployments and “what the footprint might look like” are given, and that best practices for implementations are shared. They want to be sure that their customers have a sense of being in a community of adopters and are not alone in the use of this pioneering technology. Building and sustaining close customer relationships is very important to Linguamatics, and that means an emphasis on services co-equal with selling licenses.
Linguamatics has come a long way since 2001. Besides a steady effort to improve and enhance their technology through regular product releases of I2E, there have been a lot of “show me” and “prove it” moments to which they have responded. Now, as confidence in and understanding of the technology ramps up, they are getting more complex and sophisticated questions from their customers and prospects. This is the exciting part, as they are able to sell I2E’s ability to “synthesize new information from millions of sources in ways that humans cannot.” This is done by using the technology to keep track of and process the voluminous connections among information resources that exceed human mental limits.
At this stage of growth, with early successes and excellent customer adoption, it was encouraging to hear the enthusiasm of two executives for the evolution of the industry and their opportunities in it.
The Gilbane report and a deep dive on Linguamatics are available through this Press Release on their Web site.

Semantic Technology: Sharing a Large Market Space

It is always interesting to talk shop with the experts in a new technology arena. My interview with Luca Scagliarini, VP of Strategy and Business Development for Expert System, and Brooke Aker, CEO of Expert System USA, was no exception. They had been digesting my research on Semantic Software Technologies, and last week we had a discussion about what is in the Gilbane report.

When asked if they were surprised by anything in my coverage of the market, the simple answer was “not really, nothing we did not already know.” The longer answer related to the presentation of our research illustrating the scope and depth of the marketplace. These two veterans of the semantic industry admitted that the number of players, applications and breadth of semantic software categories is impressive when viewed in one report. Mr. Scagliarini commented on the huge amount of potential still to be explored by vendors and users.

Our conversation then focused on where we think the industry is headed, and they emphasized that this is still an early-stage, evolving area. Both acknowledged the need for simplification of products to ease their adoption. It must be straightforward for buyers to understand what they are licensing and the value they can expect for the price they pay; implementation, packaging and complementary services need to be equally easy to understand.
Along the lines of simplicity, they emphasized the specialized nature of most of the successful semantic software applications, noting that these are not coming from the largest software companies. State-of-the-art tools are being commercialized and deployed for highly refined applications out of companies with a small footprint of experienced experts.

Expert System knows about the need for expertise in such areas as ontologies, search, and computational linguistic applications. For years they have been cultivating a team of people for their development and support operations. It has not always been easy to find these competencies, especially right out of academia. Aker and Scagliarini pointed out the need for a lot of pragmatism, coupled with subject expertise, to apply semantic tools for optimal business outcomes. It was hard in the early years for them to find people who could leverage their academic research experiences for a corporate mission.

Human resource barriers have eased in recent years as younger people who have grown up with a variety of computing technologies seem to grasp and understand the potential for semantic software tools more quickly.
Expert System itself is gaining traction in large enterprises that have segmented groups within IT that are dedicated to “learning” applications, and formalized ways of experimenting with, testing and evaluating new technologies. When they become experts in tool use, they are much better at proving value and making the right decisions about how and when to apply the software.

Having made good strides in energy, life sciences, manufacturing and homeland security vertical markets, Expert System is expanding its presence with the Cogito product line in other government agencies and publishing. The executives reminded me that they have semantic nets built out in Italian, Arabic and German, as well as English. This is unique among the community of semantic search companies and will position them for some interesting opportunities where other companies cannot perform.

I enjoyed listening and exchanging commentary about the semantic software technology field. However, Expert System and Gilbane both know that the semantic space is complex and they are sharing a varied landscape with a lot of companies competing for a strong position in a young industry. They have a significant share already.
For more about Expert System and the release of this sponsored research you can view their recent Press Release.

Weighing In On The Search Industry With The Enterprise In Mind

Two excellent postings by executives in the search industry give depth to the importance of Dassault Systèmes’ acquisition of Exalead. If this were simply a ho-hum failure in a very crowded marketplace, Dave Kellogg of Mark Logic Corporation and Jean Ferré of Sinequa would not care. Instead, they are picking up important signals. Industry segments as important as search evolve, and search’s appropriate applications in enterprises are still being discovered and proven. Search may change, as could the label, but whatever it is called, it is still something that will be done in enterprises.
This analyst has praise for the industry players who continue to persevere, working to get the packaging, usability, usefulness and business purposes positioned effectively. Jean Ferré is absolutely correct; the nature of the deal underscores the importance of the industry and the vision of the acquirers.
As we segue from a number of conferences featuring search (Search Engines, Enterprise Search Summit, Gilbane) to broader enterprise technologies (Enterprise 2.0) and semantic technologies (SemTech), it is important for enterprises to examine the interplay among product offerings. Getting the mix of software tools just right is probably more important than any one industry-labeled class of software, or any one product. Everybody’s software has to play nice in the sandbox to get us to the next level of adoption and productivity.
Here is one analyst cheering the champions of search and looking for continued growth in the industry…but not so big it fails.

Search Engines – Architecture Meets Adoption

Trying to summarize a technology space as varied as that covered in two days at the Search Engines Meeting in Boston, April 26-27, is both a challenge and an opportunity. Avoiding the challenge of trying to represent the full spectrum, I’ll stick with the opportunity. What is important to tell you is that search is everywhere, in every technology we use, and has a multitude of cousins and affiliated companion technologies.
The Gilbane Group focuses on content technologies. In its early history this included Web content management, document management, and CMS systems for publishers and enterprises. We now track related technologies expanding to areas including standards like DITA and XML, adoption of social tools, plus rapid growth in the drive to localize and globalize content; Gilbane has kept up with these trends.
My area, search and more specifically “enterprise search” or search “behind the firewall,” was added just over three years ago. It seemed logical to give attention to the principal reason for creating, managing and manipulating content, namely finding it. When I pay attention to search engines, I am also thinking about adjoining content technologies. My recent interest is helping readers learn how technology on both the search side and the content management/manipulation side needs better context; that means relating the two.
If one theme ran consistently through all the talks at the Search Engines Meeting, it was the need to define search in relationship to so many other content technologies. The speakers, for the most part, did a fine job of making these connections.
Here are just some snippets:
Bipin Patel, CIO of ProQuest, shared the technology challenges of maintaining a 24/7 service while driving improvements to the search usability interface. The goal is to deliver command-line search precision to users who do not have the expertise (or patience) to construct elaborate queries. Balancing the tension between expert searchers (usually librarians) and everyone else who seeks content underscores the importance of human factors. My take-away: underlying algorithms and architecture are worth little if usability is neglected.
Martin Baumgartel spoke on the Theseus project for the semantic search marketplace, a European collaborative initiative. An interesting point for me is their use of SMILA (SeMantic Information Logistics Architecture) from Eclipse. By following some links on the Eclipse site I found this interesting presentation from the International Theseus Convention in 2009. The application of this framework model underscores the interdependency of many semantically related technologies to improve search.
Tamas Doszkocs of the National Library of Medicine told a well-annotated story of the decades of search and content enhancement technologies that are evolving to contribute to semantically richer search experiences. His metaphors for the evolutionary process were fun and spot-on at a very practical level: Libraries as knowledge bases > Librarians as search engines > the Web as the knowledge base > Search engines as librarians > moving toward understanding, content, context, and people to bring us semantic search. A similar presentation is posted on the Web.
David Evans noted that there is currently no rigorous evaluation methodology for mobile search, but it is very different from what we do with desktop search. One slide that I found most interesting covered the Human Language Technologies (HLT) that contribute to a richer mobile search experience, essentially numerous semantic tools. Again, this underscores that the challenges of integrating sophisticated hardware, networking and search engine architectures for mobile search are just a piece of the solution. Adoption will depend on tools that enhance content findability and usability.
Jeff Fried of Microsoft/Fast talked about “social search” and put forth this important theme: that people like to connect to content through other people. He made me recognize how social tools are teaching us that the richness of this experience is a self-reinforcing mechanism toward “the best way to search.” It has lessons for enterprises as they struggle to adopt social tools in mindful ways in tandem with improving search experiences.
Shekhar Pradhan of Docunexus shared a relevant thought about a failure of interface architecture, which is (to paraphrase): the ubiquitous search box fails because it does not demand context or offer mechanisms for resolving ambiguity. Obviously, this undermines adoption of enterprise search when it is the only option offered.
Many more talks from this meeting will get rolled up in future reports and blogs.
I want to learn about your experiences and observations on semantic search and semantic technologies as well. Please note that we have posted a brief survey for a short time at: Semantic Technology Survey. If you have any involvement with semantic technologies, please take it.