Whether they were digitized 30 years ago or digitized now, all the books are captured in Plain Vanilla ASCII (the original 7-bit ASCII), with the same formatting rules, so they can be read easily by any machine, operating system or software, including PDAs, cellphones and eBook readers. Any individual or organization is free to convert them to different formats, without any restriction except respect for copyright laws in the country involved.
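As a rough illustration of what "Plain Vanilla ASCII" means in practice, the short Python sketch below checks whether a file contains only characters of the original 7-bit ASCII set; the file name is a placeholder, not an actual Project Gutenberg file.

```python
# Minimal sketch: test whether a text file is "Plain Vanilla ASCII",
# i.e. every byte fits in the original 7-bit ASCII range (0-127).
# "declaration.txt" is a placeholder file name used for illustration.

def is_plain_vanilla_ascii(path):
    with open(path, "rb") as f:
        data = f.read()
    return all(byte < 128 for byte in data)

if __name__ == "__main__":
    print(is_plain_vanilla_ascii("declaration.txt"))
```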
In January 2004, Project Gutenberg spread across the Atlantic with the creation of Project Gutenberg Europe. On top of its original mission, it also became a bridge between languages and cultures, with a number of national and linguistic sections, while adhering to the same principle: books for all and for free, through electronic versions that can be used and reproduced indefinitely. As a second step, it planned the digitization of images and sound, in the same spirit.
1974: INTERNET[Overview]
When Project Gutenberg began in July 1971, the internet was not even born. On July 4, 1971, Independence Day, Michael Hart keyed in The United States Declaration of Independence (signed on July 4, 1776) to the mainframe he was using, in upper case, because there was no lower case yet. But sending a 5K file to the 100 users of the embryonic network would have crashed it. So Michael mentioned where the eText was stored (though without a hypertext link, because the web was still 20 years ahead). It was downloaded by six users. The internet was born in 1974 with the creation of TCP/IP (Transmission Control Protocol / Internet Protocol) by Vinton Cerf and Bob Kahn. It began spreading in 1983, and got a boost with the invention of the web in 1990 and of the first popular browser, Mosaic, in 1993. At the end of 1997, there were 90 to 100 million users, with one million new users every month. At the end of 2000, there were over 300 million users.
1977: UNIMARC[Overview]
In 1977, the IFLA (International Federation of Library Associations and Institutions) published the first edition of UNIMARC: Universal MARC Format, followed by a second edition in 1980 and a UNIMARC Handbook in 1983. UNIMARC (Universal Machine Readable Cataloging) is a common bibliographic format for library catalogs, created as a solution to the 20 existing national MARC (Machine Readable Cataloging) formats, whose mutual incompatibility required extensive editing whenever bibliographic records were exchanged. With UNIMARC, catalogers would be able to process records created in any MARC format: records in one MARC format would first be converted into UNIMARC, and then converted into another MARC format.
[In Depth (published in 1999)]
At the time, the future of online catalogs was linked to the harmonization of the MARC format. Set up in the late 1960s, MARC is an acronym for Machine Readable Catalogue. This acronym is rather misleading, as MARC is neither a kind of catalog nor a method of cataloguing. According to UNIMARC: An Introduction, a document of the Universal Bibliographic Control and International MARC Core Programme, MARC is "a short and convenient term for assigning labels to each part of a catalogue record so that it can be handled by computers. While the MARC format was primarily designed to serve the needs of libraries, the concept has since been embraced by the wider information community as a convenient way of storing and exchanging bibliographic data."
After MARC came MARC II, which established rules to be followed consistently over the years. The MARC communication format was intended to be "hospitable to all kinds of library materials; sufficiently flexible for a variety of applications in addition to catalogue production; and usable in a range of automated systems."
Over the years, however, despite cooperation efforts, several versions of MARC emerged, e.g. UKMARC, INTERMARC and USMARC, whose paths diverged because of different national cataloguing practices and requirements. The result was an extended family of more than 20 MARC formats. Differences in data content meant that extensive editing was needed before records could be exchanged.
One solution to incompatible data was to create an international MARC format - called UNIMARC - which would accept records created in any MARC format. Records in one MARC format would first be converted into UNIMARC, and then be converted into another MARC format, so that each national bibliographic agency would need to write only two programs - one to convert into UNIMARC and one to convert from UNIMARC - instead of having to write twenty programs for the conversion of each MARC format (e.g. INTERMARC to UKMARC, USMARC to UKMARC etc.).
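The saving comes from using UNIMARC as a pivot: each agency maintains one converter into UNIMARC and one out of it, and any pair of formats is handled by chaining the two. The Python sketch below illustrates the idea; the format names are real, but the record structures and converter functions are invented placeholders.

```python
# Sketch of the "pivot format" idea behind UNIMARC.
# Record structures and converters are invented placeholders; a real
# converter would map fields between a national format and UNIMARC.

to_unimarc = {   # one converter INTO the pivot per national format
    "UKMARC": lambda rec: {"unimarc": rec},
    "USMARC": lambda rec: {"unimarc": rec},
    "INTERMARC": lambda rec: {"unimarc": rec},
}

from_unimarc = {  # one converter OUT OF the pivot per national format
    "UKMARC": lambda piv: piv["unimarc"],
    "USMARC": lambda piv: piv["unimarc"],
    "INTERMARC": lambda piv: piv["unimarc"],
}

def convert(record, source, target):
    """Convert between any two formats via UNIMARC:
    2 * N converters instead of N * (N - 1) for N formats."""
    return from_unimarc[target](to_unimarc[source](record))

# Example: an INTERMARC record exchanged with a UKMARC library.
print(convert({"title": "UNIMARC: An Introduction"}, "INTERMARC", "UKMARC"))
```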
In 1977, the IFLA (International Federation of Library Associations and Institutions) published UNIMARC: Universal MARC Format, followed by a second edition in 1980 and a UNIMARC Handbook in 1983. These publications focused primarily on the cataloguing of monographs and serials, while taking into account international efforts towards the standardization of bibliographic information reflected in the ISBDs (International Standard Bibliographic Descriptions).
In the mid-1980s, UNIMARC expanded to cover documents other than monographs and serials. A new UNIMARC Manual was produced in 1987, with an updated description of UNIMARC. By this time UNIMARC had been adopted by several bibliographic agencies as their in-house format.
Developments didn't stop there. A standard for authorities files was set up in 1991, as explained on the website of IFLA in 1998: "Previously agencies had entered an author's name into the bibliographic format as many times as there were documents associated with him or her. With the new system they created a single authoritative form of the name (with references) in the authorities file; the record control number for this name was the only item included in the bibliographic file. The user would still see the name in the bibliographic record, however, as the computer could import it from the authorities file at a convenient time. So in 1991 UNIMARC/Authorities was published."
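In database terms, the authorities file acts like a normalized table of names: the bibliographic record stores only a control number, and the authoritative form of the name is looked up at display time. The sketch below is a minimal illustration with invented identifiers and records, not an actual UNIMARC/Authorities structure.

```python
# Minimal illustration of the authorities-file mechanism.
# Identifiers and records are invented; real UNIMARC/Authorities records
# are far richer (references, notes, variant forms, etc.).

authorities = {
    "A0001": "Hugo, Victor, 1802-1885",  # single authoritative form of the name
}

bibliographic = [                        # records keep only the control number
    {"title": "Notre-Dame de Paris", "author_id": "A0001"},
    {"title": "Les Miserables", "author_id": "A0001"},
]

def display(record):
    # The name is imported from the authorities file when the record is shown.
    return f'{record["title"]} / {authorities[record["author_id"]]}'

for rec in bibliographic:
    print(display(rec))
```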
In 1991 a Permanent UNIMARC Committee was also created to regularly monitor the development of UNIMARC. Users realized that continuous maintenance - and not just the occasional rewriting of manuals - was needed, to make sure all changes were compatible with what already existed.
Besides adopting UNIMARC as a common format, the British Library (using UKMARC), the Library of Congress (using USMARC) and the National Library of Canada (using CAN/MARC) worked on harmonizing their national MARC formats. A three-year program to achieve a common MARC format was agreed on by the three libraries in December 1995.
Other libraries began using SGML (Standard Generalized Markup Language) as a common format for both the bibliographic records and the hypertextual and multimedia documents linked to them. As most publishers were using SGML for book records, librarians and publishers began working on a convergence between MARC and SGML. The Library of Congress worked on a DTD (Document Type Definition, which defines the logical structure of a document) for the USMARC format. A DTD for the UNIMARC format was developed by the European Union. Some European libraries chose SGML to encode their bibliographic data. In the Belgian Union Catalog, for example, the use of SGML made it possible to add descriptive elements and facilitated the production of an annual CD-ROM.
1984: COPYLEFT[Overview]
The term "copyleft" was invented in 1984 by Richard Stallman, who was a computer scientist at MIT (Massachusetts Institute of Technology). "Copyleft is a general method for making a program or other work free, and requiring all modified and extended versions of the program to be free as well. (…) Copyleft says that anyone who redistributes the software, with or without changes, must pass along the freedom to further copy and change it. Copyleft guarantees that every user has freedom. (…) Copyleft is a way of using of the copyright on the program. It doesn't mean abandoning the copyright; in fact, doing so would make copyleft impossible. The word 'left' in 'copyleft' is not a reference to the verb 'to leave' — only to the direction which is the inverse of 'right'. (…) The GNU Free Documentation License (FDL) is a form of copyleft intended for use on a manual, textbook or other document to assure everyone the effective freedom to copy and redistribute it, with or without modifications, either commercially or non commercially." (excerpt from the GNU website)
1990: WEB[Overview]
The internet got its first boost with the invention of the web and its hyperlinks by Tim Berners-Lee at CERN (European Laboratory for Particle Physics) in 1990, and a second boost with the release of the first popular browser, Mosaic, in 1993. The internet could now be used by anyone, and not only by computer-literate people. There were 100 million internet users in December 1997, with one million new users per month, and 300 million internet users in December 2000. In the summer of 2000, non-English-speaking users became as numerous as English-speaking users, each group accounting for about 50% of the total. According to Netcraft, an internet services company, the number of websites went from one million (April 1997) to 10 million (February 2000), 20 million (September 2000), 30 million (July 2001), 40 million (April 2003), 50 million (May 2004), 60 million (March 2005), 70 million (August 2005), 80 million (April 2006), 90 million (August 2006) and 100 million (November 2006).
[In Depth (published in 1999, updated in 2008)]
The World Wide Web - which became "the Web" or simply "the web" - was invented by Tim Berners-Lee in 1989-90. In 1998, he stated: "The dream behind the web is of a common information space in which we communicate by sharing information. Its universality is essential: the fact that a hypertext link can point to anything, be it personal, local or global, be it draft or highly polished. There was a second part of the dream, too, dependent on the web being so generally used that it became a realistic mirror (or in fact the primary embodiment) of the ways in which we work and play and socialize. That was that once the state of our interactions was on line, we could then use computers to help us analyze it, make sense of what we are doing, where we individually fit in, and how we can better work together." (excerpt from: The World Wide Web: A very short personal history, May 1998)
Christiane Jadelot, a researcher at INaLF-Nancy (INaLF: National Institute of the French Language), wrote: "I began to really use the internet in 1994, with a browser called Mosaic. I found it a very useful way of improving my knowledge of computers, linguistics, literature… everything. I was finding the best and the worst, but as a discerning user, I had to sort it all out and make choices. I particularly liked the software for e-mail, file transfers and dial-up connections. At that time I had problems with a programme called Paradox and character sets that I couldn't use. I tried my luck and threw out a question in a specialist news group. I got answers from all over the world. Everyone seemed to want to solve my problem!" (July 1998)
The W3C (World Wide Web Consortium) was founded in October 1994 to develop interoperable technologies (specifications, guidelines, software and tools) for the web, as a forum for information, commerce, communication and collective understanding. The W3C develops common protocols to lead the evolution of the web, for example the specifications of HTML (HyperText Markup Language) and XML (eXtensible Markup Language). HTML is used for publishing hypertext on the web. XML was originally designed as a tool for large-scale electronic publishing. It now plays an increasingly important role in the exchange of a wide variety of data on the web and elsewhere.
According to the network tracking firm Netcraft, there were 100 million websites on November 1st, 2006. Previous milestones in the survey were reached in April 1997 (1 million sites), February 2000 (10 million), September 2000 (20 million), July 2001 (30 million), April 2003 (40 million), May 2004 (50 million), March 2005 (60 million), August 2005 (70 million), April 2006 (80 million) and August 2006 (90 million).
1991: UNICODE[Overview]
First published in January 1991, Unicode is the universal character encoding maintained by the Unicode Consortium. "Unicode provides a unique number for every character, no matter what the platform, no matter what the program, no matter what the language." (excerpt from the website) This platform-independent encoding, originally designed as a double-byte character set, provides a basis for the processing, storage and interchange of text data in any language, and is supported by modern software and information technology protocols. Unicode is a component of the W3C (World Wide Web Consortium) specifications.
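As a small illustration of the "unique number for every character" principle, the Python sketch below prints the Unicode code point and one possible byte encoding (UTF-8) of a few arbitrarily chosen characters; the sample characters are not taken from the original text.

```python
# Every character has a unique Unicode code point, independent of
# platform, program or language. The sample characters are arbitrary.

for char in ["A", "é", "Ω", "字"]:
    code_point = ord(char)          # the character's unique number
    utf8 = char.encode("utf-8")     # one of several possible encodings
    print(f"U+{code_point:04X}  {char}  UTF-8: {utf8.hex()}")
```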
1993: ONLINE BOOKS PAGE[Overview]
Founded in 1993 by John Mark Ockerbloom while he was a student at Carnegie Mellon University, The Online Books Page is "a website that facilitates access to books that are freely readable over the internet. It also aims to encourage the development of such online books, for the benefit and edification of all."