Much of the "shareware" movement was started via user distribution of software through BBSes. A notable example was Phil Katz's PKARC (and later PKZIP, using the same ".zip" format that WinZip and other popular archivers now use); other software distribution concepts, such as freeware, postcardware like JPEGview, and donationware like Red Ryder for the Macintosh, also first appeared on BBSes. Doom from id Software and many Apogee games were distributed as shareware. The Internet has largely erased this distribution model - most users now download software directly from the developer's website rather than receiving it from another BBS user "sharing" it. Today, "shareware" commonly means electronically distributed software from a small developer.
Many commercial BBS software companies that continue to support their old BBS products have switched to the shareware model or made them entirely free. Some companies were able to move to the Internet and provide commercial products with BBS capabilities.
Most early BBSes operated as stand-alone islands. Information contained on that BBS never left the system, and users would only interact with the information and user community on that BBS alone. However, as BBSes became more widespread, there evolved a desire to connect systems together to share messages and files with distant systems and users. The largest such network was FidoNet.
As it was prohibitively expensive for the hobbyist SysOp to have a dedicated connection to another system, FidoNet was developed as a store-and-forward network. Private electronic mail (Netmail), public message boards (Echomail) and, eventually, file attachments on a FidoNet-capable BBS would be bundled into one or more archive files over a set time interval. These archive files were then compressed with ARC or ZIP and forwarded to (or polled by) another nearby node or hub via a dialup Xmodem session. Messages would be relayed around various FidoNet hubs until they were eventually delivered to their destination. The hierarchy of FidoNet BBS nodes, hubs, and zones was maintained in a routing table called a Nodelist. Some larger BBSes or regional FidoNet hubs would make several transfers per day, some even to multiple nodes or hubs; transfers usually occurred at night or in the early morning when toll rates were lowest. In FidoNet's heyday, sending a Netmail message to a user on a distant node, or participating in an Echomail discussion, could take days, especially if any nodes or hubs along the message's route made only one transfer call per day.
FidoNet was platform-independent and would work with any BBS that was written to use it. BBSes that did not have integrated FidoNet capability could usually add it using an external FidoNet front-end mailer such as FrontDoor, BinkleyTerm, InterMail or D'Bridge, and a mail processor such as FastEcho or Squish. The front-end mailer would conduct the periodic FidoNet transfers, while the mail processor would usually run just before and just after the mailer ran. This program would scan for and pack up new outgoing messages, and then unpack, sort and "toss" the incoming messages into a BBS user's local electronic mailbox or into the BBS's local message bases reserved for Echomail. As such, these mail processors were commonly called "scanner/tosser/packers."
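The scan/toss/pack cycle can be sketched in miniature. The snippet below is an illustrative model only - real FidoNet mail processors used binary packet formats such as FTS-0001, not the JSON-in-ZIP encoding assumed here - but it shows the two halves of the job: bundling outgoing messages into one archive before a transfer, and unpacking a received archive to sort ("toss") each message into its destination echo.

```python
import io
import json
import zipfile
from collections import defaultdict

def pack_outgoing(messages):
    """Bundle outgoing messages into a single in-memory ZIP archive,
    as a mail packer would before a scheduled dialup transfer."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for i, msg in enumerate(messages):
            # One file per message; the name is just a sequence number.
            zf.writestr(f"{i:04d}.msg", json.dumps(msg))
    return buf.getvalue()

def toss_incoming(archive_bytes):
    """Unpack a received archive and "toss" each message into the
    local message base for its echo (public conference) tag."""
    bases = defaultdict(list)
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        for name in zf.namelist():
            msg = json.loads(zf.read(name))
            bases[msg["echo"]].append(msg)
    return bases
```

A round trip - packing three messages and tossing them back out - leaves two messages in the hypothetical "COMP" echo's base and one in "CHAT", mirroring what a scanner/tosser/packer did with Echomail bundles.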
Many other BBS networks followed the example of FidoNet, using the same standards and the same software. These were called FidoNet Technology Networks (FTNs). They were usually smaller and targeted at selected audiences. Some networks used QWK doors, and others such as RelayNet (RIME) and WWIVnet used non-Fido software and standards.
Before commercial Internet access became common, these networks of BBSes provided regional and international e-mail and message bases. Some even provided gateways, such as UFGATE, by which members could send/receive e-mail to/from the Internet via UUCP, and many FidoNet discussion groups were shared via Usenet. Elaborate schemes allowed users to download binary files, search gopherspace, and interact with distant programs, all using plain text e-mail.
As the volume of FidoNet mail increased, and newsgroups from the early days of the Internet became available, satellite data downstream services became viable for larger systems. The satellite service provided access to FidoNet and Usenet newsgroups in large volumes at a reasonable fee. By connecting a small dish and receiver, a constant downstream of thousands of FidoNet and Usenet newsgroups could be received. The local BBS only needed to upload new outgoing messages via the modem network back to the satellite service. This method drastically reduced phone data transfers while dramatically increasing the number of message forums.
FidoNet is still in use today, though in a much smaller form, and many Echomail groups are still shared with Usenet via FidoNet-to-Usenet gateways. Widespread abuse of Usenet with spam and pornography has led many of these gateways to cease operation completely.
Since early BBSes were frequently run by computer hobbyists, they were typically technical in nature, with user communities revolving around hardware and software discussions. Many SysOps were transplants from the amateur radio community, and thus amateur and packet radio were often popular topics.
As the BBS phenomenon grew, so did the popularity of special interest boards. Bulletin Board Systems could be found for almost every hobby and interest. Popular interests included politics, religion, music, dating, and alternative lifestyles. Many SysOps also adopted a theme in which they customized their entire BBS (welcome screens, prompts, menus, and so on) to reflect that theme. Common themes were based on fantasy, or were intended to give the user the illusion of being somewhere else, such as in a sanatorium, a wizard's castle, or on a pirate ship.
In the early days, the file download library consisted of files that the SysOps obtained themselves from other BBSes and friends. Many BBSes inspected every file uploaded to their public file download library to ensure that the material did not violate copyright law. As time went on, shareware CD-ROMs were sold with up to thousands of files on each disc. Small BBSes copied each file individually to their hard drives. Some systems used a CD-ROM drive to make the files available directly. Advanced BBSes used multiple CD-ROM disc changer units that switched among six discs on demand for the caller(s). Large systems used all 26 DOS drive letters with multi-disk changers, housing tens of thousands of copyright-free shareware or freeware files available to all callers. These BBSes were generally more family-friendly, avoiding the seedier side of the hobby. Access to these systems varied from single to multiple modem lines, with some requiring little or no confirmed registration.
Some BBSes, called elite, warez or pirate boards, were exclusively used for distributing pirated software, phreaking, and other questionable or unlawful content. These BBSes often had multiple modems and phone lines, allowing several users to upload and download files at once. Most elite BBSes used some form of new user verification, where new users would have to apply for membership and attempt to prove that they were not a law enforcement officer or a lamer. The largest elite boards accepted users by invitation only. Elite boards also spawned their own subculture and gave rise to the slang known today as leetspeak.
Another common type of board was the "support BBS" run by a manufacturer of computer products or software. These boards were dedicated to supporting users of the company's products with question and answer forums, news and updates, and downloads. Most of them were not a free call. Today, these services have moved to the web.
[Figure: Access group editor in a developer build of OpenTG BBS]
Some general purpose Bulletin Board Systems had special levels of access that were given to those who paid extra money, uploaded useful files or knew the sysop personally. These specialty and pay BBSes usually had something special to offer their users such as large file libraries, warez, pornography, chat rooms or Internet access.
Pay BBSes such as The WELL and Echo NYC (now Internet forums rather than dial-up), ExecPC, and MindVox (which folded in 1996) were admired for their tightly knit communities and quality discussion forums. However, many "free" BBSes also maintained close-knit communities, and some even had annual or biannual events where users would travel great distances to meet face-to-face with their on-line friends. These events were especially popular with BBSes that offered chat rooms.
Some of the BBSes that provided access to illegal content did wind up in trouble. On July 12, 1985, in conjunction with a credit card fraud investigation, the Middlesex County, NJ Sheriff's department raided and seized The Private Sector BBS, which was at the time the official BBS of the grey-hat hacker quarterly 2600 Magazine.[5] The notorious Rusty n Edie's BBS, in Boardman, Ohio, was raided by the FBI in January 1993 for software piracy, and in November 1997 it was sued by Playboy for copyright infringement. In Flint, Michigan, a 21-year-old man was charged in March 1996 with distributing child pornography through his BBS.[6]
BBSes were generally text-based, rather than GUI-based, and early BBSes conversed using the simple ASCII character set. However, some home computer manufacturers extended the ASCII character set to take advantage of the advanced color and graphics capabilities of their systems. BBS software authors included these extended character sets in their software, and terminal program authors included the ability to display them when a compatible system was called. Atari's native character set was known as ATASCII, while most Commodore BBSes supported PETSCII. PETSCII was also supported by the nationwide online service Quantum Link.[nb 2]
The use of these custom character sets was generally incompatible between manufacturers. Unless a caller was using terminal emulation software written for, and running on, the same type of system as the BBS, the session would simply fall back to simple ASCII output. For example, a Commodore 64 user calling an Atari BBS would use ASCII rather than the machine's native character set. As time progressed, most terminal programs began using the ANSI standard, but could use their native character set if it was available.
COCONET, a BBS system made by Coconut Computing, Inc., was released in 1988 and supported only a GUI interface (no text interface was available), working in EGA/VGA graphics mode, which made it stand out from the text-based BBS systems. COCONET's bitmap and vector graphics and support for multiple type fonts were inspired by the PLATO system, and its graphics capabilities were based on what was available in the Borland BGI graphics library. A number of companies wanted to license the COCONET GUI but Coconut Computing chose not to, and as a result a competing approach called Remote Imaging Protocol (RIP) emerged, promoted by Telegrafix in the early to mid 1990s, but it never became widespread. A similar technology called NAPLPS was also considered, and although it became the underlying graphics technology behind the Prodigy service, it never gained popularity in the BBS market. There were several GUI-based BBSes on the Apple Macintosh platform, including TeleFinder and FirstClass, but these remained widely used only in the Mac market.
In the UK, the BBC Micro-based OBBS software, available from Pace for use with their modems, optionally allowed for colour and graphics using the Teletext-based graphics mode available on that platform. Other systems used the Viewdata protocols made popular in the UK by British Telecom's Prestel service and by the on-line magazine Micronet 800, which was busy giving away modems with its subscriptions.
The most popular form of online graphics was ANSI art, which combined the blocks and symbols of the IBM Extended ASCII character set with ANSI escape sequences to change colors on demand, control the cursor, format the screen, and even play basic musical tones. During the late 1980s and early 1990s, most BBSes used ANSI to create elaborate welcome screens and colorized menus, and thus ANSI support was a sought-after feature in terminal client programs. ANSI art became so popular that it spawned an entire BBS "artscene" subculture devoted to it.
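The escape sequences themselves are simple byte strings. The snippet below builds a tiny "welcome screen" from three standard ANSI/VT100 controls - SGR color selection (`ESC[31m`), cursor positioning (`ESC[row;colH`), and clear-screen (`ESC[2J`) - which is exactly the mechanism ANSI art and BBS menus relied on:

```python
ESC = "\x1b["  # CSI: the Control Sequence Introducer that starts every sequence

def fg_color(n):
    """SGR sequence selecting foreground color n (30-37 for the basic colors)."""
    return f"{ESC}{n}m"

def goto(row, col):
    """CUP sequence moving the cursor to (row, col), 1-based."""
    return f"{ESC}{row};{col}H"

RESET = f"{ESC}0m"  # SGR 0: reset all attributes

# A minimal "welcome screen": clear the screen, position the cursor,
# then print a banner in red and reset attributes.
banner = f"{ESC}2J" + goto(3, 10) + fg_color(31) + "* WELCOME *" + RESET
print(banner)
```

Run in any ANSI-capable terminal, this clears the display and prints the banner in red at row 3, column 10 - the same effect a caller's ANSI terminal program produced when a BBS sent these bytes down the modem line.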
In 1987, the Amiga program Skyline BBS became the first to feature Skypix, a script markup language communication protocol capable of giving the user a complete graphical interface, with rich graphic content, changeable fonts, mouse-controlled actions, animations and sound.[4]
Today, most BBS software that is still actively supported, such as WorldGroup, Wildcat! BBS and Citadel/UX, is Web-enabled, and the traditional text interface has been replaced by (or operates concurrently with) a Web-based user interface. Those more nostalgic for the true BBS experience can use NetSerial (Windows) or DOSBox (Windows/*nix) to redirect DOS COM-port software to telnet, allowing them to connect to Telnet BBSes using 1980s- and 1990s-era modem terminal emulation software like Telix, Terminate, Qmodem and Procomm Plus. Modern 32-bit terminal emulators such as mTelnet and SyncTerm include native telnet support.
Unlike modern websites and online services, which are typically hosted by third-party companies in commercial data centers, BBS computers (especially for smaller boards) were typically operated from the SysOp's home. As such, access could be unreliable, and in many cases only one user could be on the system at a time. Only larger BBSes with multiple phone lines, using specialized hardware, multitasking software, or a LAN connecting multiple computers, could host multiple simultaneous users.
The first BBSes used homebrew software,[nb 1] quite often written or customized by the SysOps themselves, running on early S-100 microcomputer systems such as the Altair, IMSAI and Cromemco under the CP/M operating system. Soon after, BBS software was being written for all of the major home computer systems of the late 1970s era - the Apple II, Atari, Commodore and TRS-80 being some of the most popular.
A few years later, in 1981, IBM introduced the first DOS-based IBM PC, and due to the overwhelming popularity of PCs and their clones, DOS soon became the operating system on which the majority of BBS programs ran. RBBS-PC, ported over from the CP/M world, and Fido BBS, created by Tom Jennings (who later founded FidoNet), were the first notable DOS BBS programs. Many successful commercial BBS programs were developed for DOS, such as PCBoard BBS, RemoteAccess BBS, and Wildcat! BBS, which had early roots in the Colossus BBS started by the author of the popular shareware communications program Qmodem. Popular freeware BBS programs for MS-DOS included Telegard BBS and Renegade BBS, which both had early origins in leaked WWIV BBS source code. Several dozen other BBS programs were developed over the DOS era; many were released as shareware, while some, including iniquity, were released as freeware.
During the mid-1980s, many sysops opted for the less expensive, ubiquitous Commodore 64 (introduced in 1982), which was popular among software pirate groups. Popular commercial BBS programs were Blue Board, Ivory BBS, Color64 and CNet 64. In the early 1990s a small number of BBSes were also running on the Commodore Amiga models 500, 1000 and 1200 (using external hard drives), and the Amiga 2000, Amiga 3000 and Amiga 4000 (which had built-in hard drives). Popular BBS software for the Amiga were ABBS, Amiexpress, StormforceBBS, Infinity and Tempest.
MS-DOS continued to be the most popular operating system for BBS use up until the mid-1990s, and in the early years most multi-node BBSes either ran under a DOS-based multitasker such as DESQview or consisted of multiple computers connected via a LAN. (Around 1990, OS/2 introduced preemptive multitasking of DOS sessions, an alternative to DESQview for multi-node BBSes.) In the late 1980s, a handful of BBS developers implemented multitasking communications routines which, although they ran under MS-DOS, allowed multiple phone lines and multiple users to connect to the same physical BBS computer. These included Galacticomm's MajorBBS (later WorldGroup), eSoft's TBBS, and Falken. Much of the code for these BBS systems was still written in assembler or Pascal; only a minority was written in C.
By 1995, many of the DOS-based BBSes had begun switching to modern multitasking operating systems, such as OS/2, Windows 95, and Linux. TCP/IP networking allowed most of the remaining BBSes to evolve and include Internet hosting capabilities. Recent BBS software, such as Synchronet, EleBBS, DOC and Wildcat! BBS, provides access using the Telnet protocol rather than dialup, or by using legacy MS-DOS-based BBS software with a FOSSIL-to-Telnet redirector such as NetFoss.
A precursor to the public Bulletin Board System was Community Memory, started in August, 1973 in Berkeley, California, using hardwired terminals located in neighborhoods.[1]
The first public dial-up Bulletin Board System was developed by Ward Christensen. According to an early interview, while he was snowed in during the Great Blizzard of 1978 in Chicago, Christensen along with fellow hobbyist Randy Suess, began preliminary work on the Computerized Bulletin Board System, or CBBS. CBBS went online on February 16, 1978 in Chicago, Illinois.[2] CBBS, which kept a count of callers, reportedly connected 253,301 callers before it was finally retired.[citation needed]
With the original 110 and 300 baud modems of the late 1970s, BBSes were particularly slow, but speed improved with the introduction of 1200 bit/s modems in the early 1980s, and this led to a substantial increase in popularity. The demand for complex ANSI and ASCII screens and larger file transfers taxed available channel capacity, which in turn propelled demand for faster modems.
Most of the information was displayed using ordinary ASCII text or ANSI art, though some BBSes experimented with higher-resolution visual formats such as the innovative but obscure Remote Imaging Protocol. Many systems became quite sophisticated in graphic presentation, especially considering that they were confined to ASCII codes. Several systems attempted to simulate the appearance of the GUI displays that were just appearing in DOS add-ons and on Apple systems. Probably the ultimate development in graphic presentation was the dynamic page implementation of the University of Southern California BBS (USCBBS) by Susan Biddlecomb, which predated the HTML dynamic web page. A complete "dynamic web page" implementation was accomplished using TBBS with a TDBS add-on, presenting a complete menu system individually customized for each user.
During the mid-1980s, the BBS software RBBS-PC became very popular and was commonly used by students, schools, churches and others. One of the largest BBSes of the time, known as "Avery I", was run by a young system operator, Greg J. Gardner, from a small town in North Carolina, and was one of the largest private, non-profit BBSes of its era.
Towards the early 1990s, the BBS industry became so popular that it spawned three monthly magazines - Boardwatch, BBS Magazine, and, in Asia and Australia, Chips 'n Bits Magazine - which devoted extensive coverage to the software and technology innovations, the people behind them, and listings of US and worldwide BBSes.[3] In addition, in the USA, a major monthly magazine, Computer Shopper, carried a list of BBSes along with a brief abstract of each of their offerings.
According to the FidoNet Nodelist, BBSes reached their peak usage around 1996, which was the same period that the World Wide Web suddenly became mainstream. BBSes rapidly declined in popularity thereafter, and were replaced by systems using the Internet for connectivity. Some of the larger commercial BBSes, such as ExecPC BBS, became actual Internet Service Providers.
The website textfiles.com serves as an archive documenting the history of the BBS. Its owner, Jason Scott, also produced BBS: The Documentary, a DVD film that chronicles the history of the BBS and features interviews with well-known people (mostly from the United States) from the heyday of the BBS era.
The historical BBS list on textfiles.com contains over 105,000 BBSes that have existed over a span of 20 years in North America alone.
A Bulletin Board System, or BBS, is a computer system running software that allows users to connect and log in to the system using a terminal program. Once logged in, a user can perform functions such as uploading and downloading software and data, reading news and bulletins, and exchanging messages with other users, either through electronic mail or in public message boards. Many BBSes also offer on-line games, in which users can compete with each other, and BBSes with multiple phone lines often provide chat rooms, allowing users to interact with each other.
Originally BBSes were accessed only over a phone line using a modem, but by the early 1990s some BBSes allowed access via Telnet, packet-switched networks, or packet radio connections.
Ward Christensen coined the term "Bulletin Board System" as a reference to the traditional cork-and-pin bulletin board often found in entrances of supermarkets, schools, libraries or other public areas where people can post messages, advertisements, or community news. By "computerizing" this method of communications, the name of the system was born: CBBS - Computerized Bulletin Board System. See History.
During their heyday from the late 1970s to the mid 1990s, most BBSes were run as a hobby free of charge by the system operator (or "SysOp"), while other BBSes charged their users a subscription fee for access, or were operated by a business as a means of supporting their customers. Bulletin Board Systems were in many ways a precursor to the modern form of the World Wide Web, social network services and other aspects of the Internet.
Early BBSes were often a local phenomenon, as one had to dial into a BBS with a phone line and would pay additional long-distance charges for a BBS outside the local calling area. Thus, many users of a given BBS usually lived in the same area, and activities such as BBS Meets or Get Togethers were common, where users of the board would gather at a local restaurant, the SysOp's home, or a similar venue and meet face to face.
As the use of the Internet became more widespread in the mid to late 1990s, traditional BBSes rapidly faded in popularity. Today, Internet forums occupy much of the same social and technological space as BBSes did, and the term BBS is often used to refer to any online forum or message board.
Although BBSing survives only as a niche hobby in most parts of the world, it is still an extremely popular form of communication for Taiwanese youth (see PTT Bulletin Board System). Most BBSes are now accessible over telnet and typically offer free email accounts, FTP services, IRC and all of the protocols commonly used on the Internet.
When publishing on the Internet, the most commonly accepted practice is to write articles with a catchy title using relevant keywords and 400-1500 words in the body. Since the primary goal of article marketing is search engine traffic, authors generally incorporate relevant keywords or keyphrases into their articles. The generally accepted keyword density for most article directories is about 2% to 3%; anything above that can be considered keyword stuffing. Most article directories currently do not accept HTML tags in either the title or the body of the article.
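The density figure is simple to compute: occurrences of the keyphrase, weighted by its length in words, divided by the article's total word count. A minimal sketch follows; the tokenization rule is an assumption for illustration, since directories do not publish a precise algorithm.

```python
import re

def keyword_density(text, phrase):
    """Fraction of the article's words taken up by the keyword phrase:
    (occurrences x words in phrase) / total words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    phrase_words = phrase.lower().split()
    total = len(words)
    # Count occurrences of the phrase as a sliding window over the word list.
    hits = sum(
        words[i:i + len(phrase_words)] == phrase_words
        for i in range(total - len(phrase_words) + 1)
    )
    return hits * len(phrase_words) / total if total else 0.0
```

For a 100-word article containing the two-word phrase once, this returns 0.02, i.e. 2% - right at the bottom of the range directories tolerate.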
Internet article marketing is an Internet marketing approach that subtly promotes products and services online via article directories. Article directories with good page rank receive a large volume of visitors and are considered authority sites by search engines, which often leads to a good amount of traffic. These directories then pass PageRank to the author's website and also send traffic from readers.
Internet marketers will typically attempt to maximize the results of an article marketing campaign by submitting their articles to a number of article directories. However, most of the major search engines filter duplicate content to stop the same material from appearing multiple times in searches. Some marketers attempt to circumvent this filter by creating a number of variations of an article, a practice known as article spinning. By doing this, one article can theoretically acquire visitors from a number of article directories.
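Article spinning is often done with a "spintax" template, in which alternatives are written inline as {option A|option B}. A minimal expander is sketched below, assuming that simple, un-nested syntax (real spinning tools support nesting and synonym dictionaries):

```python
import itertools
import re

def spin(template):
    """Expand a flat {a|b|c} spintax template into every article variation."""
    # Split on {...} groups; odd-indexed pieces are the alternative sets.
    parts = re.split(r"\{([^{}]*)\}", template)
    choices = [p.split("|") if i % 2 else [p] for i, p in enumerate(parts)]
    # Cartesian product of all choices yields every spun variant.
    return ["".join(combo) for combo in itertools.product(*choices)]
```

A template with two groups of two alternatives yields four variants - each one "unique" enough, spinners hope, to slip past duplicate-content filters.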
Having your content featured on niche blogs and focused content sites run and managed by others is a popular form of article marketing. Becoming a Guest Blogger on these sites introduces the author to interested parties that are otherwise unreachable.
Most forms of search engine optimization and Internet marketing require a domain, a hosting plan, and an advertising budget. Article marketing, however, uses article directories as a free host and receives traffic via organic searches thanks to the directory's search engine authority. This can be very useful to new Internet entrepreneurs, because it does not require a big budget.
Article marketing has been used by professionals for nearly as long as mass print has been available. In paper-print form (as opposed to online forms), article marketing is utilized commonly by business owners as a means of obtaining free press space. A business provides useful content to the newspaper and in return the newspaper prints the business contact information with the given article. Because newspapers and other traditional media are expected to present content on limited budgets, this arrangement is generally advantageous for all parties involved.
For example, an accounting firm may market itself by writing an article titled "The Top 10 Ways to Avoid Being Audited" and offer it to the local newspapers several weeks prior to tax season. Similarly, a roofing company may offer radio stations a concise article titled "How to Avoid Ice Damage to Your Roof this Winter" shortly before the winter season.
Article marketing is a type of advertising in which businesses write short articles related to their respective industries. These articles are made available for distribution and publication in the marketplace. Each article has a bio box and byline (collectively known as the resource box) that include references and contact information for the author's business. Well-written content articles released for free distribution have the potential to increase the business's credibility within its market and to attract new clients. These articles are often syndicated by other websites and published on multiple sites.
Because websites are often complex, the term "content management" appeared in the late 1990s, identifying a method (or, in some cases, a tool) for organizing all the diverse elements to be contained on a website.[5] Content management often means that within a business there is a range of people with distinct content-related roles, such as content author, editor, publisher, and administrator. It can also mean there is a content management system through which people in those different roles operate the system and organize the information for a website.
Even though a business may organize to collect, contain and represent information online, content needs to be organized in a manner that provides the reader (browser) with an overall "customer experience" that is easy to use, lets the site be navigated with ease, and lets the website fulfill the role assigned to it by the business - that is, to sell to customers, to market products and services, or to inform customers.
The phrase can be interpreted to mean that - without original and desirable content, or consideration for the rights and commercial interests of content creators - any media venture is likely to fail through lack of appealing content, regardless of other design factors.
Content can mean any creative work, such as text, graphics, images or video.
"Content is King" is a current meme when organizing or building a website[4] (although Andrew Odlyzko in "Content is Not King" argues otherwise). Text content is particularly important for search engine placement. Without original text content, most search engines will be unable to match search terms to the content of a site.
While there are many millions of pages that are predominantly composed of HTML, or some variation, in general we view data, applications, E-Services, images (graphics), audio and video files, personal web pages, archived e-mail messages, and many more forms of file and data systems as belonging to websites and web pages.
While there are many hundreds of ways to deliver information on a website, there is a common body of search engine optimization knowledge that should be read as an advisory on how anything other than text ought to be delivered. Search engines are currently text-based and are one of the common ways people using a browser locate sites of interest.
Even though we may embed various protocols within web pages, the "web page" composed of "html" (or some variation) content is still the dominant way whereby we share content. And while there are many web pages with localized proprietary structure (most usually, business websites), many millions of websites abound that are structured according to a common core idea.
Blogs are a type of website that contain mainly web pages authored in HTML (although the blogger may be totally unaware of this, because of the blogging tool in use). Millions of people use blogs online; a blog is now the new "home page" - a place where a persona can reveal personal information and/or build a concept of who that persona is. Even though a blog may be written for other purposes, such as promoting a business, the core of a blog is that it is written by a "person" who reveals information from his or her perspective.
Search engine sites are composed mainly of HTML content, but they also have a typically structured approach to revealing information. A search engine results page (SERP) displays a heading, usually the name of the search engine, followed by a list of websites and their addresses. What is listed are the results of a query, which may be defined as keywords. The results page lists webpages that are connected in some way with the keywords used in the query.
Discussion boards are sites composed of "textual" content organized in HTML or some variation that can be viewed in a web browser. The driving mechanism of a discussion board is that users are registered, and once registered can write posts. Often a discussion board is made up of posts asking some type of question, to which other users may provide answers.
Ecommerce sites are largely composed of textual material embedded with graphics displaying pictures of the item(s) for sale. However, extremely few such sites are composed page-by-page in some static variant of HTML. Generally, webpages are composed as they are served from a database to a customer using a web browser, though the user still sees a mainly textual document arriving as a webpage. Ecommerce sites are usually organized by software we identify as a "shopping cart".
Web content is dominated by the "page" concept. Having its beginnings in an academic setting dominated by type-written pages, the idea of the web was to link directly from one academic paper to another. This was a completely revolutionary idea in the late 1980s and early 1990s, when the best link one could make was to cite a reference in the midst of a typewritten paper and name that reference either at the bottom of the page or on the last page.
When it became possible for any person to write and own a Mosaic page, the concept of a "home page" blurred the idea of a page.[2] Anyone could own a "Web page" or a "home page", although in many cases the website actually contained many physical pages in spite of being called "a page". People often cited their "home page" to provide credentials, links to anything the person supported, or any other individual content the person wanted to publish.
Even though "the web" may be the resource we commonly use to "get to" particular locations online, many different protocols[3] are invoked to access embedded information. When we are given an address, such as http://www.youtube.com, we expect to see a range of web pages, but in each page we have embedded tools to watch "video clips".
While the Internet began with a U.S. Government research project in the late 1950s, the web in its present form did not appear on the Internet until after Tim Berners-Lee and his colleagues at the European laboratory CERN proposed the concept of linking documents with hypertext. Even then, it was not until Mosaic, the forerunner of the famous Netscape Navigator, appeared that the Internet became more than a file-serving system.
The use of hypertext, hyperlinks and a page-based model of sharing information, introduced with Mosaic and later Netscape, helped to define web content, and the formation of websites. Largely, today we categorize websites as being a particular type of website according to the content a website contains.
Web content is the textual, visual or aural content that is encountered as part of the user experience on websites. It may include, among other things: text, images, sounds, videos and animations.
In Information Architecture for the World Wide Web, Lou Rosenfeld and Peter Morville write, "We define content broadly as 'the stuff in your Web site.' This may include documents, data, applications, e-services, images, audio and video files, personal Web pages, archived e-mail messages, and more. And we include future stuff as well as present stuff."[1]
Work licensed under a Creative Commons License is governed by applicable copyright law.[8] This allows Creative Commons licenses to be applied to all work falling under copyright, including: books, plays, movies, music, articles, photographs, blogs, and websites. Creative Commons does not recommend the use of Creative Commons licenses for software.[9]
However, application of a Creative Commons license may not modify the rights allowed by fair use or fair dealing or exert restrictions which violate copyright exceptions. Furthermore, Creative Commons Licenses are non-exclusive and non-revocable. Any work or copies of the work obtained under a Creative Commons license may continue to be used under that license.
In the case of works protected by multiple Creative Commons licenses, the user may choose any one of them to comply with.
Since 2004,[4] all current licenses require attribution of the original author. The attribution must be given to "the best of [one's] ability using the information available".[7] Generally this implies the following:
  • Include any copyright notices (if applicable). If the work itself contains any copyright notices placed there by the copyright holder, those notices must be left intact, or reproduced in a way that is reasonable to the medium in which the work is being re-published.
  • Cite the author's name, screen name, or user ID, etc. If the work is being published on the Internet, it is nice to link that name to the person's profile page, if such a page exists.
  • Cite the work's title or name (if applicable), if such a thing exists. If the work is being published on the Internet, it is nice to link the name or title directly to the original work.
  • Cite the specific CC license the work is under. If the work is being published on the Internet, it is nice if the license citation links to the license on the CC website.
  • Mention whether the work is a derivative work or adaptation. In addition to the above, one needs to identify that one's work is a derivative work, e.g., “This is a Finnish translation of [original work] by [author].” or “Screenplay based on [original work] by [author].”
Mixing and matching these conditions produces sixteen possible combinations, of which eleven are valid Creative Commons licenses and five are not. Of the five invalid combinations, four include both the "nd" and "sa" clauses, which are mutually exclusive; and one includes none of the clauses. Of the eleven valid combinations, the five that lack the "by" clause have been retired because 98% of licensors requested attribution, though they do remain available for reference on the website.[3][4][5] This leaves six regularly used licenses:
  1. Attribution alone (by)
  2. Attribution + Noncommercial (by-nc)
  3. Attribution + NoDerivatives (by-nd)
  4. Attribution + ShareAlike (by-sa)
  5. Attribution + Noncommercial + NoDerivatives (by-nc-nd)
  6. Attribution + Noncommercial + ShareAlike (by-nc-sa)
For example, the Creative Commons Attribution (BY) license allows one to share and remix (create derivative works), even for commercial use, so long as attribution is given.[6]
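The combinatorics above can be verified with a short script. This is only an illustrative sketch, not part of any Creative Commons tooling; it simply enumerates subsets of the four license clauses and applies the two validity rules stated in the text:

```python
from itertools import combinations

clauses = ["by", "nc", "nd", "sa"]

# Every subset of the four clauses is a candidate combination.
all_combos = [set(c) for r in range(len(clauses) + 1)
              for c in combinations(clauses, r)]

# "nd" and "sa" are mutually exclusive, and the empty set is not a license.
valid = [c for c in all_combos if not ({"nd", "sa"} <= c) and c]

# Combinations lacking "by" were retired, since 98% of licensors wanted attribution.
retired = [c for c in valid if "by" not in c]
current = [c for c in valid if "by" in c]

print(len(all_combos), len(valid), len(retired), len(current))
# → 16 combinations, 11 valid, 5 retired, 6 in regular use
```

The counts match the text: sixteen combinations, eleven valid, five retired, six in regular use.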
The original set of licenses all grant the "baseline rights", such as the right to distribute the copyrighted work worldwide, without changes, at no charge.[2] The details of each license depend on the version and comprise a selection of four conditions:
Attribution Attribution (by) Licensees may copy, distribute, display and perform the work and make derivative works based on it only if they give the author or licensor the credits in the manner specified by the license.
Non-commercial Noncommercial (nc) Licensees may copy, distribute, display, and perform the work and make derivative works based on it only for noncommercial purposes.
Non-derivative No Derivative Works (nd) Licensees may copy, distribute, display and perform only verbatim copies of the work, not derivative works based on it.
Share-alike Share-alike (sa) Licensees may distribute derivative works only under a license identical to the license that governs the original work. (See also copyleft.)
Creative Commons licenses are several copyright licenses that allow the distribution of copyrighted works. The licenses differ by several combinations that condition the terms of distribution. They were initially released on December 16, 2002 by Creative Commons, a U.S. non-profit corporation founded in 2001.
As of July 2011, Creative Commons licenses have been "ported" to over 50 jurisdictions worldwide. No new ports are being started while preparations for version 4.0 of the license suite begin.[1]
A table of contents, usually headed simply "Contents" and abbreviated informally as TOC, is a list of the parts of a book or document organized in the order in which the parts appear. The contents usually include the titles or descriptions of the first-level headers, such as chapter titles in longer works, often the second-level or section titles (A-heads) within the chapters as well, and occasionally even third-level titles (subsections or B-heads). The depth of detail in tables of contents depends on the length of the work, with longer works having less. Formal reports (ten or more pages, too long to put into a memo or letter) also have a table of contents. Within an English-language book, the table of contents usually appears after the title page, copyright notices, and, in technical journals, the abstract; and before any lists of tables or figures, the foreword, and the preface.
Printed tables of contents indicate the page number where each part starts, while online ones offer links to each part. The format and location of the page numbers is a matter of style for the publisher. If the page numbers appear after the heading text, they might be preceded by characters called leaders, usually dots or periods, that run from the chapter or section titles to the page numbers on the opposite side of the page; alternatively, the page numbers might remain closer to the titles. In some cases, the page number appears before the text.
If a book or document contains chapters, articles, or stories by different authors, the author's name also usually appears in the table of contents.
In some cases, a table of contents contains a brief description of each chapter's (or, more usually, each first-level section's) content rather than a list of subheadings.
Matter preceding the table of contents is generally not listed there. However, all pages except the outside cover are counted, and the table of contents is often numbered with a lowercase Roman numeral page number. Many popular word processors, such as Microsoft Word, WordPerfect, and StarWriter are capable of automatically generating a table of contents if the author of the text uses specific styles for chapter titles, headings, subheadings, etc.
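The automatic generation mentioned above can be sketched in a few lines. This is not how any particular word processor works internally; it is a hypothetical illustration that scans marked-up headings and formats entries with dot leaders, with page numbers supplied by the caller (in a real word processor they would come from the layout engine):

```python
def build_toc(lines, pages):
    """Collect first-level (#) and second-level (##) headings into TOC entries.

    `pages` maps each heading title to its page number.
    """
    toc = []
    for line in lines:
        if line.startswith("## "):      # second-level heading (A-head)
            title = line[3:]
            toc.append("    " + title.ljust(40, ".") + str(pages[title]))
        elif line.startswith("# "):     # first-level heading (chapter)
            title = line[2:]
            toc.append(title.ljust(44, ".") + str(pages[title]))
    return toc

doc = ["# Introduction", "Some body text.", "## History", "# Methods"]
pages = {"Introduction": 1, "History": 2, "Methods": 5}
for entry in build_toc(doc, pages):
    print(entry)
```

The same principle underlies style-based TOC generation: the author marks headings with styles, and the software gathers them, in document order, with their resolved page numbers.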
Open content is a neologism coined by David Wiley in 1998,[1] drawing an analogy between open source practices and the publishing of content online.[2] Open content thus describes any kind of creative work, or content, published under an open content license that explicitly allows copying and modification of its information by anyone, not exclusively by a single organization, firm, or individual.
Open content is an alternative paradigm to the use of copyright to create monopolies; rather than leading to monopoly, open content facilitates the democratization of knowledge.[3]
The term open content is also used to describe works that would be more correctly described as open access. An open access work is available to everyone whereas an open content work can be copied and adapted by anyone.[4]
Widespread adoption of the Internet has made it feasible to distribute hitherto inaccessible government documentation directly to citizens in any location for minimal cost. This allows information on lawmaking and on local and state government to be analysed by a government's constituents. Although such information was previously released mainly as media statements for public relations purposes, documentation that may be of use to citizens and businesses has, in some jurisdictions, been mandated for release by default.[35] This is in contrast to laws such as the freedom of information act, or their local equivalents, which may make documentation available only on request rather than mandate explicit publication. According to the Journal of Public Administration, such a stance has been cited as an aid to reducing the complexity associated with government processes, as well as aiding a reduction in corruption.[36]
In academic work, free works are still a niche phenomenon, owing to the difficulty and cost of maintaining a fully qualified peer review process. Authors may see open access publishing as a method of expanding the audience that is able to access their work to allow for greater impact of the publication, or for ideological reasons.[21][22][23] Groups such as the Public Library of Science and Biomed Central provide capacity for review and publishing of free works; though such publications tend to be limited to fields such as life sciences. Some universities, such as the Massachusetts Institute of Technology (MIT), have adopted open access publishing by default.[24] In traditional journals, alternatives such as delayed free publications or charging researchers for open access publishing are occasionally used.[25][26] Some funding agencies, such as the National Institutes of Health, require academic work to be published in the public domain as a grant requirement.[27][28] Open content publication has been seen as a method of reducing costs associated with information retrieval in research, as universities typically pay to subscribe for access to content that is published through traditional means[29][30][31] whilst improving journal quality by discouraging the submission of research articles of reduced quality.[31]
Subscriptions to non-free journals may be expensive for universities, which is particularly noteworthy given that the content of the scientific articles is generated and peer-reviewed by university staff themselves at no cost to the publisher. This has led to disputes between publishers and some universities over subscription costs, such as the one between the University of California and the Nature Publishing Group.[32][33]
For teaching purposes, some universities, including MIT, provide freely available course content, such as lecture notes, video resources and tutorials. This content is distributed via internet resources to the general public. Publication of such resources may be either by a formal institution-wide program,[34] or alternately via informal content provided by individual academics or departments.
Free content principles have been translated into fields such as engineering, where designs and engineering knowledge can be readily shared and duplicated, in order to reduce overheads associated with project development. Open design principles can be applied in engineering and technological applications, with projects in mobile telephony, small-scale manufacture,[17] the automotive industry,[18][19] and even agricultural areas.[20]
Technologies such as distributed manufacturing can allow computer-aided manufacturing and computer-aided design techniques to be able to develop small-scale production of components for the development of new, or repair of existing, devices. Rapid fabrication technologies underpin these developments, which allow end users of technology to be able to construct devices from pre-existing blueprints, using software and manufacturing hardware to convert information into physical objects.
Free software, often referred to as open source software, is a maturing technology, with major companies utilising free software to provide services and technology to both end users and technical consumers. The ease of dissemination has allowed for increased modularity, which lets smaller groups contribute to projects and simplifies collaboration.
Open source development models have been classified as having a similar peer-recognition and collaborative benefit incentives that are typified by more classical fields such as scientific research, with the social structures that result from this incentive model decreasing production cost.[15]
Given sufficient interest in a software component, peer-to-peer distribution methods can minimize its distribution costs, removing the burden of infrastructure maintenance from developers. Because distribution resources are simultaneously provided by consumers, these software distribution models are scalable; that is, the method remains feasible regardless of the number of consumers. In some cases, free software vendors use peer-to-peer technology as a method of dissemination.[16]
In general, project hosting and code distribution are not a problem for most free projects, as a number of providers offer these services at no cost.
In media, which includes textual, audio, and visual content, free licensing schemes such as some of the licenses made by Creative Commons have allowed for the dissemination of works under a clear set of legal permissions. Not all Creative Commons licenses are entirely free: their permissions range from very liberal general redistribution and modification of the work to more restrictive redistribution-only licensing. Since February 2008, Creative Commons licenses which are entirely free carry a badge indicating that they are "approved for free cultural works".[12] Repositories exist which exclusively feature free material, providing content such as photographs, clip art, music,[13] and literature.[14]
While extensive reuse of free content from one website on another website is legal, it is usually not sensible because of the duplicate-content problem: a website that is largely an exact copy of another ranks much lower in search engines, so every successful project tries to present something different.
Projects that provide free content exist in several areas of interest, such as software, academic literature, general literature, music, images, video, and engineering.
Technology has reduced the cost of publication and reduced the entry barrier sufficiently to allow for the production of widely disseminated materials by individuals or small groups. Projects to provide free literature and multimedia content have become increasingly prominent owing to the ease of dissemination of materials that is associated with the development of computer technology. Such dissemination may have been too costly prior to these technological developments.
Copyfree is a play on the words copyleft and copyright, describing a practice that contrasts with both: using copyright law to remove the restrictions on distributed copies and modified versions of a work that are imposed both by copyleft licensing and by copyright itself. Where copyleft licensing generally requires that all derivative works be distributed under the terms of the same license, copyfree licensing generally requires only that the original work and direct modifications of it continue to be distributed under the same terms.[9] The Copyfree Initiative maintains the Copyfree Standard Definition,[10] which establishes the specification a license must meet to qualify for Copyfree Initiative certification as a copyfree license.
A symbol commonly associated with copyfree policy is a modification of the copyright symbol, replacing the C with a capital F to produce the copyfree logo.[11]
Copyleft is a play on the word copyright and describes the practice of using copyright law to remove restrictions on distributing copies and modified versions of a work.[7] The aim of copyleft is to use the legal framework of copyright to enable non-author parties to be able to reuse and, in many licensing schemes, modify content that is created by an author. Unlike works in the public domain, the author still maintains copyright over the material, however the author has granted a non-exclusive license to any person to distribute, and often modify, the work. Copyleft licenses require that any derivative works be distributed under the same terms, and that the original copyright notices be maintained. A symbol commonly associated with copyleft is a reversal of the copyright symbol, facing the other way; the opening of the C points left rather than right. Unlike the copyright symbol, the copyleft symbol does not have a codified meaning.[8]
The public domain is a range of creative works whose copyright has expired, or was never established; as well as ideas and facts[nb 1] which are ineligible for copyright. A public domain work is a work whose author has either relinquished to the public, or no longer can claim control over, the distribution and usage of the work. As such any person may manipulate, distribute, or otherwise utilize the work, without legal ramifications. A work in the public domain or released under a permissive licence may be referred to as "copycenter".[6]
Copyright is a legal concept that grants the author or creator of a work legal rights to control the duplication and public performance of his or her work. In many jurisdictions this control is limited to a time period, after which the work enters the public domain. During the period of copyright, the author's work may only be copied, modified, or publicly performed with the consent of the author, unless the use is a fair use. Traditional copyright control thus limits the use of an author's work to those who can, or are willing to, pay royalties for its usage, or who restrict themselves to fair use. It also limits the use of content whose author cannot be found.[4] Finally, it creates a perceived barrier between authors by limiting derivative works, such as mashups and collaborative content.[5]
Free content, or free information, is any kind of functional work, artwork, or other creative content that meets the definition of a free cultural work.[1] A free cultural work is one which has no significant legal restriction on people's freedom:
  • to use or modify the content,
  • to distribute copies of the content,
  • to distribute works derived from the content.[2]
Although different definitions are used, free content is legally similar if not identical to open content. An analogy is the use of the rival terms free software and open source which describe ideological differences rather than legal ones.
Free content encompasses all works in the public domain and also those copyrighted works whose licenses honor and uphold the freedoms mentioned above. Because copyright law in most countries by default grants copyright holders monopolistic control over their creations, copyright content must be explicitly declared free, usually by the referencing or inclusion of licensing statements from within the work.
Though a work which is in the public domain because its copyright has expired is considered free, it can become non-free again if the copyright law changes.[3]
A value-added service (VAS) is a popular telecommunications industry term for non-core services, or in short, all services beyond standard voice calls and fax transmissions. However, it can be used in any service industry for services offered at little or no cost to promote the primary business. In the telecommunication industry, on a conceptual level, value-added services add value to the standard service offering, spurring subscribers to use their phones more and allowing the operator to drive up its ARPU. For mobile phones, technologies like SMS, MMS and GPRS are usually considered value-added services, but a distinction may also be made between standard (peer-to-peer) content and premium-charged content. These are called mobile value-added services (MVAS), which are often simply referred to as VAS.
Value-added services are supplied either in-house by the mobile network operator themselves or by a third-party value-added service provider (VASP), also known as a content provider (CP) such as All Headline News or Reuters.
VASPs typically connect to the operator using protocols like Short message peer-to-peer protocol (SMPP), connecting either directly to the short message service centre (SMSC) or, increasingly, to a messaging gateway that gives the operator better control of the content.
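At the wire level, every SMPP exchange, including the submit_sm operation a VASP uses to send a message, is framed as a PDU with a fixed 16-byte header of four big-endian 32-bit integers: command_length, command_id, command_status, and sequence_number. The sketch below shows only this framing; a real submit_sm body would carry service type, source and destination addresses, and the short message itself, and a real client would first bind to the SMSC:

```python
import struct

SUBMIT_SM = 0x00000004  # command_id assigned to submit_sm by the SMPP spec

def frame_pdu(body, command_id, sequence_number, status=0):
    """Frame an SMPP PDU: 16-byte big-endian header followed by the body.

    command_length counts the header itself plus the body.
    """
    length = 16 + len(body)
    header = struct.pack(">IIII", length, command_id, status, sequence_number)
    return header + body

# Empty body keeps the framing example simple; see the caveat above.
pdu = frame_pdu(b"", SUBMIT_SM, sequence_number=1)
print(pdu.hex())
```

The messaging gateways mentioned above speak this same framing on the VASP side, which is why they can inspect and filter content before it reaches the SMSC.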