Thursday 31 October 2019

History of Search Engines: From 1945 to Google Today

As We May Think (1945):

The concepts of hypertext and a memory extension came to life in July of 1945, when Vannevar Bush, fresh from the scientific camaraderie that was a side effect of WWII, published As We May Think in The Atlantic Monthly.
He urged scientists to work together to help build a body of knowledge for all mankind. Here are a few selected sentences and paragraphs that drive his point home.
Specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.
The difficulty seems to be, not so much that we publish unduly in view of the extent and variety of present day interests, but rather that publication has been extended far beyond our present ability to make real use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.
A record, if it is to be useful to science, must be continuously extended, it must be stored, and above all it must be consulted.
Not only was he a firm believer in storing data, he also believed that for a data source to be useful to the human mind, it should represent how the mind works as closely as we can manage.
Our ineptitude in getting at the record is largely caused by the artificiality of the systems of indexing. ... Having found one item, moreover, one has to emerge from the system and re-enter on a new path.
The human mind does not work this way. It operates by association. ... Man cannot hope fully to duplicate this mental process artificially, but he certainly ought to be able to learn from it. In minor ways he may even improve, for his records have relative permanency.
Presumably man's spirit should be elevated if he can better review his own shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory.
He then proposed the idea of a virtually limitless, fast, reliable, extensible, associative memory storage and retrieval system. He named this device a memex.

Gerard Salton (1960s - 1990s):

Gerard Salton, who died on August 28th of 1995, was the father of modern search technology. His teams at Harvard and Cornell developed the SMART information retrieval system. Salton’s Magic Automatic Retriever of Text included important concepts like the vector space model, Inverse Document Frequency (IDF), Term Frequency (TF), term discrimination values, and relevance feedback mechanisms.
He authored a 56-page book called A Theory of Indexing which does a great job of explaining many of the tests upon which search is still largely based. Tom Evslin posted a blog entry about what it was like to work with Mr. Salton.
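To make the vector space model concrete, here is a minimal sketch of TF-IDF weighting and cosine similarity in Python. The toy documents, tokenization, and weighting formula are simplified illustrations, not Salton's SMART system.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a sparse TF-IDF vector (term -> weight) for each document."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency: how many documents contain each term
    df = Counter(term for tokens in tokenized for term in set(tokens))
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({
            term: (count / len(tokens)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term -> weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = math.sqrt(sum(w * w for w in a.values())) * math.sqrt(sum(w * w for w in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs and mats",
]
vectors = tfidf_vectors(docs)
print(cosine(vectors[0], vectors[1]))  # documents 0 and 1 share several terms
```

Documents are represented as weighted term vectors, and documents closest (by cosine angle) to the query vector are ranked highest; that basic idea still underlies much of modern retrieval.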

Ted Nelson:

Ted Nelson created Project Xanadu in 1960 and coined the term hypertext in 1963. His goal with Project Xanadu was to create a computer network with a simple user interface that solved many social problems like attribution.
While Ted was against complex markup code, broken links, and many other problems associated with traditional HTML on the WWW, much of the inspiration to create the WWW was drawn from Ted's work.
There is still conflict surrounding the exact reasons why Project Xanadu failed to take off.

Advanced Research Projects Agency Network:

ARPANet is the network which eventually led to the internet. Wikipedia has a great background article on ARPANet, and Google Video has a free, interesting video about ARPANet from 1972.

Archie (1990):

The first few hundred web sites began in 1993 and most of them were at colleges, but long before most of them existed came Archie. The first search engine, Archie, was created in 1990 by Alan Emtage, a student at McGill University in Montreal. The original intent of the name was "archives," but it was shortened to Archie.
Archie helped solve the data scatter problem by combining a script-based data gatherer with a regular expression matcher for retrieving file names matching a user query. Essentially, Archie became a database of filenames which it would match against users' queries.
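As a rough illustration of that matching step (not Emtage's actual code), the sketch below greps a gathered list of filenames with a user-supplied regular expression; the file paths are hypothetical.

```python
import re

# Hypothetical stand-in for the gathered index of filenames on anonymous FTP sites.
file_index = [
    "pub/gnu/emacs-18.59.tar.Z",
    "pub/games/nethack-3.0.tar.Z",
    "pub/docs/rfc1149.txt",
]

def search(pattern, index):
    """Return the filenames whose paths match the user's regular expression."""
    query = re.compile(pattern)
    return [name for name in index if query.search(name)]

print(search(r"emacs.*\.tar", file_index))  # ['pub/gnu/emacs-18.59.tar.Z']
```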

Veronica & Jughead:

As word of mouth about Archie spread, it started to become word of computer, and Archie grew so popular that the University of Nevada System Computing Services group developed Veronica. Veronica served the same purpose as Archie, but it worked on plain text files. Soon another tool named Jughead appeared with the same purpose as Veronica. Both were used for files sent via Gopher, the protocol created by Mark McCahill at the University of Minnesota in 1991.

File Transfer Protocol:

Tim Berners-Lee existed at this point, but there was no World Wide Web. The main way people shared data back then was via File Transfer Protocol (FTP).
If you had a file you wanted to share you would set up an FTP server. If someone was interested in retrieving the data they could do so using an FTP client. This process worked effectively in small groups, but the data became as fragmented as it was collected.

Tim Berners-Lee & the WWW (1991):

While an independent contractor at CERN from June to December 1980, Berners-Lee proposed a project based on the concept of hypertext, to facilitate sharing and updating information among researchers. There he built a prototype system named ENQUIRE.
After leaving CERN in 1980 to work at John Poole's Image Computer Systems Ltd., he returned in 1984 as a fellow. In 1989, CERN was the largest Internet node in Europe, and Berners-Lee saw an opportunity to join hypertext with the Internet. In his words, "I just had to take the hypertext idea and connect it to the TCP and DNS ideas and — ta-da! — the World Wide Web". He used similar ideas to those underlying the Enquire system to create the World Wide Web, for which he designed and built the first web browser and editor (called WorldWideWeb and developed on NeXTSTEP) and the first Web server called httpd (short for HyperText Transfer Protocol daemon).
The first Web site built was at http://info.cern.ch/ and was first put online on August 6, 1991. It provided an explanation about what the World Wide Web was, how one could own a browser and how to set up a Web server. It was also the world's first Web directory, since Berners-Lee maintained a list of other Web sites apart from his own.
In 1994, Berners-Lee founded the World Wide Web Consortium (W3C) at the Massachusetts Institute of Technology.
Tim also created the Virtual Library, the oldest catalogue of the web, and wrote a book about creating the web titled Weaving the Web.

What is a Bot?

Computer robots are simply programs that automate repetitive tasks at speeds impossible for humans to reproduce. The term bot on the internet is usually used to describe anything that interfaces with the user or that collects data.
Search engines use "spiders", which search (or spider) the web for information. They are software programs which request pages much like regular browsers do. In addition to reading the contents of pages for indexing, spiders also record links.
  • Link citations can be used as a proxy for editorial trust.
  • Link anchor text may help describe what a page is about.
  • Link co-citation data may be used to help determine what topical communities a page or website exists in.
  • Additionally, links are stored to help search engines discover new documents to crawl later (a minimal crawler sketch follows this list).
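Here is a minimal, hypothetical crawler sketch in Python showing the two jobs described above: recording links with their anchor text, and queueing newly discovered URLs. A real spider would respect robots.txt, throttle itself, and use a proper HTML parser.

```python
import re
import urllib.request
from collections import deque
from urllib.parse import urljoin

# Crude regex for links; fine for a sketch, not for production HTML parsing.
link_re = re.compile(r'<a[^>]+href="([^"]+)"[^>]*>(.*?)</a>', re.I | re.S)

def crawl(seeds, max_pages=10):
    """Breadth-first crawl recording (source, target, anchor text) triples."""
    queue, seen, links, crawled = deque(seeds), set(seeds), [], 0
    while queue and crawled < max_pages:
        url = queue.popleft()
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue
        crawled += 1
        for href, anchor in link_re.findall(html):
            target = urljoin(url, href)
            links.append((url, target, anchor.strip()))  # link graph + anchor text
            if target not in seen:
                seen.add(target)
                queue.append(target)  # newly discovered document to crawl later
    return links

print(crawl(["https://example.com/"])[:5])  # hypothetical seed URL
```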
Another bot example is the chatterbot, which focuses heavily on a specific topic. These bots attempt to act like a human and communicate with humans on said topic.

Parts of a Search Engine:

Search engines consist of 3 main parts. Search engine spiders follow links on the web to request pages that are either not yet indexed or have been updated since they were last indexed. These pages are crawled and are added to the search engine index (also known as the catalog). When you search using a major search engine you are not actually searching the web, but are searching a slightly outdated index of content which roughly represents the content of the web. The third part of a search engine is the search interface and relevancy software. For each search query, search engines typically do most or all of the following (a simplified sketch follows this list):
  • Accept the user's query, checking it for any advanced syntax and checking whether it is misspelled in order to recommend more popular or correctly spelled variations.
  • Check to see if the query is relevant to other vertical search databases (such as news search or product search) and place relevant links to a few items from that type of search query near the regular search results.
  • Gather a list of relevant pages for the organic search results. These results are ranked based on page content, usage data, and link citation data.
  • Request a list of relevant ads to place near the search results.
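A highly simplified sketch of that query-time pipeline follows; the inverted index, spelling correction, and scoring below are made-up placeholders rather than any real engine's logic.

```python
# Toy inverted index: term -> {page: term frequency}. All data is hypothetical.
index = {
    "search":  {"page1.html": 3, "page2.html": 1},
    "engine":  {"page1.html": 2, "page3.html": 4},
    "history": {"page3.html": 1},
}
known_words = set(index)

def correct_spelling(word):
    """Stand-in spell check: fall back to any known word sharing the first letter."""
    if word in known_words:
        return word
    return next((w for w in known_words if w[0] == word[0]), word)

def search(query):
    """Accept a query, correct spelling, gather candidates, and rank them."""
    terms = [correct_spelling(t) for t in query.lower().split()]
    scores = {}
    for term in terms:
        for page, tf in index.get(term, {}).items():
            scores[page] = scores.get(page, 0) + tf  # naive content-based score
    # a real engine would also blend usage data and link citation data here
    return sorted(scores, key=scores.get, reverse=True)

print(search("serch engine"))  # spell-corrected, then ranked: ['page1.html', 'page3.html', 'page2.html']
```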
Searchers tend to click mostly on the top few search results, as noted in this article by Jakob Nielsen and backed up by this search result eye tracking study.

Want to learn more about how search engines work?

Types of Search Queries:

Andrei Broder authored A Taxonomy of Web Search [PDF], which notes that most searches fall into the following 3 categories:
  • Informational - seeking static information about a topic
  • Transactional - shopping at, downloading from, or otherwise interacting with the result
  • Navigational - send me to a specific URL

Improve Your Searching Skills:

Want to become a better searcher? Most large scale search engines offer:
  • Advanced search pages which help searchers refine their queries to request files which are newer or older, local in nature, from specific domains, published in specific formats, or refined in other ways; for example, in Google the ~ character means related to.
  • Vertical search databases which may help structure the information index or limit the search index to a more trusted or better structured collection of sources, documents, and information.
Nancy Blachman's Google Guide offers searchers free Google search tips, and Greg R. Notess's Search Engine Showdown offers a search engine features chart.
There are also many popular smaller vertical search services. For example, Del.icio.us allows you to search URLs that users have bookmarked, and Technorati allows you to search blogs.

World Wide Web Wanderer:

Soon the web's first robot came. In June 1993 Matthew Gray introduced the World Wide Web Wanderer. He initially wanted to measure the growth of the web and created this bot to count active web servers. He soon upgraded the bot to capture actual URLs. His database became known as the Wandex.
The Wanderer was as much of a problem as it was a solution because it caused system lag by accessing the same page hundreds of times a day. It did not take long for him to fix this software, but people started to question the value of bots.

ALIWEB:

In October of 1993 Martijn Koster created Archie-Like Indexing of the Web, or ALIWEB, in response to the Wanderer. ALIWEB crawled meta information and allowed users to submit the pages they wanted indexed, along with their own page descriptions. This meant it needed no bot to collect data and was not using excessive bandwidth. The downside of ALIWEB was that many people did not know how to submit their site.

Robots Exclusion Standard:

Martijn Koster also hosts the web robots pages, which set the standards for how search engines should index or not index content. These allow webmasters to block bots from their site on a whole-site level or page-by-page basis.
By default, if information is on a public web server and people link to it, search engines generally will index it.
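As a small illustration, Python's standard library can parse a robots.txt file and answer whether a given crawler may fetch a URL; the robots.txt content, crawler name, and URLs below are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for an example site.
robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /drafts/page.html
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved spider checks before fetching each URL.
for url in ("https://example.com/index.html", "https://example.com/private/data.html"):
    print(url, parser.can_fetch("MyCrawler", url))
# https://example.com/index.html True
# https://example.com/private/data.html False
```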
In 2005 Google led a crusade against blog comment spam, creating the nofollow link attribute, which can be applied at the individual link level. After this was pushed through, Google quickly expanded the stated purpose of nofollow, claiming it was for any link that was sold or not under editorial control.

Primitive Web Search:

By December of 1993, three full-fledged, bot-fed search engines had surfaced on the web: JumpStation, the World Wide Web Worm, and the Repository-Based Software Engineering (RBSE) spider. JumpStation gathered info about the title and header from Web pages and retrieved these using a simple linear search. As the web grew, JumpStation slowed to a stop. The WWW Worm indexed titles and URLs. The problem with JumpStation and the World Wide Web Worm is that they listed results in the order that they found them, and provided no discrimination. The RBSE spider did implement a ranking system.
Since early search algorithms did not do adequate link analysis or cache full page content, it was extremely hard to find something if you did not know its exact name.

Excite:

Excite came from the project Architext, which was started in February of 1993 by six Stanford undergrad students. They had the idea of using statistical analysis of word relationships to make searching more efficient. They were soon funded, and in mid 1993 they released copies of their search software for use on web sites.
Excite was bought by a broadband provider named @Home in January of 1999 for $6.5 billion, and was renamed Excite@Home. In October of 2001 Excite@Home filed for bankruptcy. InfoSpace bought Excite out of bankruptcy court for $10 million.

Web Directories:

VLib:

When Tim Berners-Lee set up the web he created the Virtual Library, which became a loose confederation of topical experts maintaining relevant topical link lists.

Global Network Navigator

O'Reilly Media began the GNN project in May 1993 & officially launched it in August 1993. It provided an online directory based upon the Whole Internet User's Guide and Catalog & it was also the first web site to contain clickable advertisements. Their first advertiser was a law firm named Heller, Ehrman, White and McAuliffe.
GNN was acquired by AOL in 1995 & shuttered in 1996.

EINet Galaxy

The EINet Galaxy web directory was born in January of 1994. It was organized similarly to how web directories are today. The biggest reason the EINet Galaxy became a success was that it also contained Gopher and Telnet search features in addition to its web search feature. The size of the web in early 1994 did not really require a web directory; however, other directories soon followed.

Yahoo! Directory

In April 1994 David Filo and Jerry Yang created the Yahoo! Directory as a collection of their favorite web pages. As their number of links grew they had to reorganize and become a searchable directory. What set the directories above the Wanderer is that they provided a human-compiled description with each URL. As time passed and the Yahoo! Directory grew, Yahoo! began charging commercial sites for inclusion, and the inclusion rates for listing a commercial site kept increasing. The current cost is $299 per year. Many informational sites are still added to the Yahoo! Directory for free.
On September 26, 2014, Yahoo! announced they would close the Yahoo! Directory at the end of 2014, though it was transitioned to being part of Yahoo! Small Business and remained online at business.yahoo.com.

Open Directory Project

In 1998 Rich Skrenta and a small group of friends created the Open Directory Project, a directory which anybody can download and use in whole or in part. The ODP (also known as DMOZ) became the largest internet directory, almost entirely run by a group of volunteer editors. The Open Directory Project grew out of the frustration webmasters faced waiting to be included in the Yahoo! Directory. Netscape bought the Open Directory Project in November of 1998. Later that same month AOL announced the intention of buying Netscape in a $4.5 billion all-stock deal.
DMOZ closed on March 17, 2017. When the directory shut down it had 3,861,210 active listings in 90 languages. Numerous online mirrors of the directory have been published at DMOZtools.net, ODP.org & other locations.

LII

Google offers a librarian newsletter to help librarians and other web editors make information more accessible and categorize the web. The second Google librarian newsletter came from Karen G. Schneider, who was the director of the Librarians' Internet Index. LII was a high quality directory aimed at librarians. Her article explains what she and her staff looked for when evaluating quality, credible resources to add to the LII. Most other directories, especially those which have a paid inclusion option, hold lower standards than selective catalogs created by librarians.
The LII was later merged into the Internet Public Library, another well-kept directory of websites, which went into archive-only mode after 20 years of service.

Business.com

Due to the time-intensive nature of running a directory, and the general lack of scalability of the business model, the quality and size of directories sharply drops off after you get past the first half dozen or so general directories. There are also numerous smaller industry, vertical, or locally oriented directories. Business.com, for example, is a directory of business websites.
Business.com was a high-profile purchase for local directory company R.H. Donnelley. Unfortunately that $345 million deal on July 26, 2007 only accelerated the bankruptcy of R.H. Donnelley, which led them to sell the Business.com directory to Resource Nation in February of 2011. The Google Panda algorithm hit Business.com, which made it virtually impossible for the site to maintain a strong cashflow based on organic search rankings. Business.com was once again sold in June of 2016 to the Purch Group.

Looksmart

Looksmart was founded in 1995. They competed with the Yahoo! Directory, with the two frequently raising their inclusion rates back and forth. In 2002 Looksmart transitioned into a pay-per-click provider, which charged listed sites a flat fee per click. That caused the demise of any good faith or loyalty they had built up, although it allowed them to profit by syndicating those paid listings to some major portals like MSN. The problem was that Looksmart became too dependent on MSN, and in 2003, when Microsoft announced they were dumping Looksmart, it basically killed their business model.
In March of 2002, Looksmart bought a search engine by the name of WiseNut, but it never gained traction. Looksmart also owns a catalog of content articles organized in vertical sites, but due to limited relevancy Looksmart has lost most (if not all) of their momentum. Looksmart had tried to expand their directory by buying the non-commercial Zeal directory for $20 million, but on March 28, 2006 Looksmart shut down the Zeal directory, hoping to drive traffic using Furl, a social bookmarking program.

Search Engines vs Directories:

All major search engines have some limited editorial review process, but the bulk of relevancy at major search engines is driven by automated search algorithms which harness the power of the link graph on the web. In fact, some algorithms, such as TrustRank, bias the web graph toward trusted seed sites without requiring a search engine to take on much of an editorial review staff. Thus, some of the more elegant search engines in essence let those who link to other sites act as the editorial reviewers, voting with their links.
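As a rough sketch of the seed-biased idea (not the published TrustRank algorithm's exact formulation), the snippet below propagates trust from a hand-picked seed set through a made-up link graph with a damping factor.

```python
# Seed-biased trust propagation in the spirit of TrustRank.
# The graph, seed set, and parameters are all invented for illustration.
graph = {               # page -> pages it links to
    "seed.org": ["a.com", "b.com"],
    "a.com":    ["b.com", "spam.biz"],
    "b.com":    ["a.com"],
    "spam.biz": ["spam.biz"],
}
seeds = {"seed.org"}
damping, iterations = 0.85, 30

trust = {page: (1.0 if page in seeds else 0.0) for page in graph}
for _ in range(iterations):
    new = {}
    for page in graph:
        # trust flowing in from pages that link here, split across their outlinks
        inbound = sum(trust[src] / len(out) for src, out in graph.items() if page in out)
        # seed pages keep receiving a share of trust directly
        new[page] = (1 - damping) * (1.0 if page in seeds else 0.0) + damping * inbound
    trust = new

print(sorted(trust.items(), key=lambda kv: -kv[1]))  # pages closer to the seeds score higher
```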
Unlike highly automated search engines, directories are manually compiled taxonomies of websites. Directories are far more cost and time intensive to maintain due to their lack of scalability and the necessary human input to create each listing and periodically check the quality of the listed websites.
General directories are largely giving way to expert vertical directories, temporal news sites (like blogs), and social bookmarking sites (like del.icio.us). In addition, each of those three publishing formats also aids in improving the relevancy of major search engines, which further cuts at the need for (and profitability of) general directories.
Here is a great background video on the history of search.

WebCrawler:

Brian Pinkerton of the University of Washington released WebCrawler on April 20, 1994. It was the first crawler which indexed entire pages. Soon it became so popular that during daytime hours it could not be used. AOL eventually purchased WebCrawler and ran it on their network. Then in 1997, Excite bought out WebCrawler, and AOL began using Excite to power its NetFind. WebCrawler opened the door for many other services to follow suit. Within a year of its debut came Lycos, Infoseek, and OpenText.

Lycos:

Lycos was the next major search development, having been designed at Carnegie Mellon University around July of 1994. Michael Mauldin was responsible for this search engine and went on to become chief scientist at Lycos Inc.
On July 20, 1994, Lycos went public with a catalog of 54,000 documents. In addition to providing ranked relevance retrieval, Lycos provided prefix matching and word proximity bonuses. But Lycos' main difference was the sheer size of its catalog: by August 1994, Lycos had identified 394,000 documents; by January 1995, the catalog had reached 1.5 million documents; and by November 1996, Lycos had indexed over 60 million documents -- more than any other Web search engine. In October 1994, Lycos ranked first on Netscape's list of search engines by finding the most hits on the word "surf".

Infoseek:

Infoseek also started out in 1994, claiming to have been founded in January. They really did not bring a whole lot of innovation to the table, but they offered a few add-ons, and in December 1995 they convinced Netscape to use them as their default search, which gave them major exposure. One popular feature of Infoseek was allowing webmasters to submit a page to the search index in real time, which was a search spammer's paradise.

AltaVista:

AltaVista's online debut came during that same month. AltaVista brought many important features to the web scene. They had nearly unlimited bandwidth (for that time), they were the first to allow natural language queries and advanced searching techniques, and they allowed users to add or delete their own URL within 24 hours. They even allowed inbound link checking. AltaVista also provided numerous search tips and advanced search features.
Due to mismanagement, a fear of result manipulation, and portal-related clutter, AltaVista was largely driven into irrelevancy around the time Inktomi and Google started becoming popular. On February 18, 2003, Overture signed a letter of intent to buy AltaVista for $80 million in stock and $60 million cash. After Yahoo! bought out Overture they rolled some of the AltaVista technology into Yahoo! Search, and occasionally use AltaVista as a testing platform.

Inktomi:

The Inktomi Corporation came about on May 20, 1996 with its search engine HotBot. Two UC Berkeley cohorts created Inktomi from the improved technology gained from their research. HotWired listed this site and it quickly became hugely popular.
In October of 2001 Danny Sullivan wrote an article titled Inktomi Spam Database Left Open To Public, which highlights how Inktomi accidentally allowed the public to access their database of spam sites, which listed over 1 million URLs at that time.
Although Inktomi pioneered the paid inclusion model, it was nowhere near as efficient as the pay-per-click auction model developed by Overture. Licensing their search results also was not profitable enough to pay for their scaling costs. They failed to develop a profitable business model, and sold out to Yahoo! for approximately $235 million, or $1.65 a share, in December of 2002.

Ask.com (Formerly Ask Jeeves):

In April of 1997 Ask Jeeves was launched as a natural language search engine. Ask Jeeves used human editors to try to match search queries. Ask was powered by DirectHit for a while, which aimed to rank results based on their popularity, but that technology proved too easy to spam as a core algorithm component. In 2000 the Teoma search engine was released, which used clustering to organize sites by Subject Specific Popularity, which is another way of saying they tried to find local web communities. In 2001 Ask Jeeves bought Teoma to replace the DirectHit search technology.
Jon Kleinberg's Authoritative sources in a hyperlinked environment [PDF] was a source of inspiration that led to the eventual creation of Teoma. Mike Grehan's Topic Distillation [PDF] also explains how subject specific popularity works.
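For the curious, here is a compact sketch of the hub/authority iteration from Kleinberg's paper, which is the flavor of link analysis behind subject-specific popularity; the tiny link graph is invented for illustration.

```python
import math

# Toy link graph (made up): page -> pages it links to.
graph = {
    "hub1":  ["auth1", "auth2"],
    "hub2":  ["auth1", "auth2", "auth3"],
    "auth1": [],
    "auth2": ["auth1"],
    "auth3": [],
}

hubs = {p: 1.0 for p in graph}
auths = {p: 1.0 for p in graph}

for _ in range(20):
    # authority score: sum of the hub scores of pages linking to you
    auths = {p: sum(hubs[q] for q in graph if p in graph[q]) for p in graph}
    # hub score: sum of the authority scores of pages you link to
    hubs = {p: sum(auths[t] for t in graph[p]) for p in graph}
    # normalize so the scores do not blow up
    for scores in (auths, hubs):
        norm = math.sqrt(sum(v * v for v in scores.values())) or 1.0
        for p in scores:
            scores[p] /= norm

print(sorted(auths.items(), key=lambda kv: -kv[1]))  # best authorities first
```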
On March 4, 2004, Ask Jeeves agreed to acquire Interactive Search Holdings for 9.3 million shares of common stock and options and to pay $150 million in cash. On March 21, 2005 Barry Diller's IAC agreed to acquire Ask Jeeves for $1.85 billion. IAC owns many popular websites like Match.com, Ticketmaster.com, and Citysearch.com, and is promoting Ask across their other properties. In 2006 Ask Jeeves was renamed Ask, and they killed the separate Teoma brand.

AllTheWeb

AllTheWeb was a search technology platform launched in May of 1999 to showcase Fast's search technologies. They had a sleek user interface with rich advanced search features, but on February 23, 2003, AllTheWeb was bought by Overture for $70 million. After Yahoo! bought out Overture they rolled some of the AllTheWeb technology into Yahoo! Search, and occasionally use AllTheWeb as a testing platform.

Meta Search Engines

Most meta search engines draw their search results from multiple other search engines, then combine and rerank those results. This was a useful feature back when search engines were less savvy at crawling the web and each engine had a significantly unique index. As search has improved the need for meta search engines has been reduced.
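A minimal sketch of one common way to combine such results, reciprocal rank fusion, appears below; the engine names and result URLs are invented, and real meta search engines may weight sources differently.

```python
# Combine ranked result lists from several engines with reciprocal rank fusion.
# The engine names and result URLs are hypothetical.
results = {
    "engineA": ["a.com", "b.com", "c.com"],
    "engineB": ["b.com", "d.com", "a.com"],
    "engineC": ["c.com", "a.com", "e.com"],
}

def fuse(result_lists, k=60):
    """Score each URL by summing 1 / (k + rank) across the engines that returned it."""
    scores = {}
    for ranking in result_lists.values():
        for rank, url in enumerate(ranking, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

print(fuse(results))  # 'a.com' ranks first because it appears in all three lists
```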
Hotbot was owned by Wired, had funky colors, fast results, and a cool name that sounded geeky, but died off not long after Lycos bought it and ignored it. It was later reborn as a meta search engine. Unlike most meta search engines, Hotbot only pulls results from one search engine at a time, but it allows searchers to select amongst a few of the more popular search engines on the web. Currently Dogpile, owned by Infospace, is probably the most popular meta search engine on the market, but like all other meta search engines, it has limited market share.
One of the larger problems with meta search in general is that most meta search engines tend to mix pay per click ads in their organic search results, and for some commercial queries 70% or more of the search results may be paid results. I also created Myriad Search, which is a free open source meta search engine without ads.

Vertical Search

The major search engines are fighting for content and marketshare in verticals outside of the core algorithmic search product. For example, both Yahoo and MSN have question answering services where humans answer each other's questions for free. Google has a similar offering, but question answerers are paid for their work.
Google, Yahoo, and MSN are also fighting to become the default video platform on the web, which is a vertical where an upstart named YouTube also has a strong position.
Yahoo and Microsoft are aligned on book search in a group called the Open Content Alliance. Google, going it alone in that vertical, offers a proprietary Google Book search.
All three major search engines provide a news search service. Yahoo! has partnered with some premium providers to allow subscribers to include that content in their news search results. Google has partnered with the AP and a number of other news sources to extend their news database back over 200 years. And Topix.net is a popular news service which sold 75% of its ownership to 3 of the largest newspaper companies. Thousands of weblogs are updated daily reporting the news, some of which are competing with (and beating out) the mainstream media. If that were not enough options for news, social bookmarking sites like Del.icio.us frequently update recently popular lists, there are meme trackers like Techmeme that track the spread of stories through blogs, and sites like Digg allow their users to directly vote on how much exposure each item gets.
Google also has a Scholar search program which aims to make scholarly research easier to do.
In some verticals, like shopping search, other third party players may have significant marketshare, gained through offline distribution and branding (for example, yellow pages companies), or gained largely through arbitraging traffic streams from the major search engines.
On November 15, 2005 Google launched a product called Google Base, which is a database of just about anything imaginable. Users can upload items and title, describe, and tag them as they see fit. Based on usage statistics this tool can help Google understand which vertical search products they should create or place more emphasis on. They believe that owning other verticals will allow them to drive more traffic back to their core search service. They also believe that targeted measured advertising associated with search can be carried over to other mediums. For example, Google bought dMarc, a radio ad placement firm. Yahoo! has also tried to extend their reach by buying other high traffic properties, like the photo sharing site Flickr, and the social bookmarking site del.icio.us.
After a couple years of testing, on May 5th, 2010 Google unveiled a 3 column search result layout which highlights many vertical search options in the left rail.
Google shut down their financial services comparison search tool Google Comparison on March 23, 2016. Google has continued to grow their Product Listing Ads, local inventory ads & hotel ads while shedding many other vertical search functions. When Google shut down their financial comparison search tool they shifted from showing a maximum of 3 ads at the top of the search results to showing a maximum of 4 ads above the organic search results.

Search & The Mobile Web

By 2015 mobile accounted for more than half of digital ad spending. The online ad market is growing increasingly consolidated with Google & Facebook eating almost all of the growth.
Google's dominance of desktop search only increases on mobile, as Google pays Apple to be the default search provider in iOS, and Android's secret contracts bundled Google as the default search option.
Due to the increasing importance of mobile, Google shifted to showing search results in a single column on desktop computers, with the exception of sometimes showing knowledge graph cards or graphic Product Listing Ads in the right column of the desktop search results.
Ad blocking has become widespread on both mobile devices & desktop computers. However Google pays AdBlock Plus to allow ads to show on Google.com & Facebook can bypass ad blocking with inline ads in their mobile application. Most other publishers have had much less luck in dealing with the rise of ad blockers.
As publishers have been starved for revenues, some publishers like Tronc have sacrificed user experience by embedding thousands of auto-playing videos in their articles. This in turn only accelerates the demand for ad blockers.
Facebook & Google have been in a fight to increase their time on site on mobile devices. Facebook introduced Instant Articles which ports publisher articles into Facebook, however publishers struggled to monetize the exposure.
Google launched a competing solution called Accelerated Mobile Pages (AMP) & publishers also struggled to monetize that audience.

A Post-fact Web

An additional factor which has stripped audience from publisher websites is the increasing number of featured snippets, knowledge graph results & interactive tools embedded directly in Google's search results. These features may extract the value of publishers' websites without sending them anything in return. They have also caused issues when Google's algorithms chose to display factually incorrect answers.
Both Google & Facebook are partly to blame for the decline of online publishing. As they defunded web publishers it encouraged more outrageous publishing behaviors.
Another contributing factor to the decline of online publishing is how machine learning algorithms measure engagement and fold it back into ranking. People are more likely to share or link to something which confirms their political ideology, while fairly neutral, unbiased pieces may contain analysis that almost everyone dislikes, which means those publishers have a less loyal following & readers are less likely to share their work. That in turn makes the work less likely to be seen on social networks like Facebook or rank high in Google search results.

Search Engine Marketing

Search engine marketing is marketing via search engines, done through organic search engine optimization, paid search engine advertising, and paid inclusion programs.