Kolom Iklan

Monday, September 8, 2008

Using Peer-to-Peer Data Routing for Infrastructure-based Wireless Networks

Authors: Sethuram Balaji Kodeswaran, Olga Vladi Ratsimor, Anupam Joshi, Tim Finin, and Yelena Yesha

Book Title: First IEEE International Conference on Pervasive Computing and Communications

Date: March 18, 2003

Abstract: A mobile ad-hoc network is an autonomous system of mobile routers that are self-organizing and completely decentralized, with no requirement for dedicated infrastructure support. Wireless infrastructure in the form of base stations is often available in many popular areas, offering high-speed data connectivity to a wired network. In this paper, we describe an approach where infrastructure components utilize passing mobile nodes to route data to other devices that are out of range. In our scheme, base stations track user mobility and determine data usage patterns of users as they pass by. Based on this, base stations predict the future data needs of a passing mobile device. These base stations then collaborate (over the wired network) to identify other mobile devices with spare capacity whose routes intersect that of a needy device, and use these carriers to transport the needed data. When such a carrier meets a needy device, they form ad hoc peer-to-peer communities to transfer this data. In this paper, we describe the motivation behind our approach and the different component interactions. We present the results of simulation work that we have done to validate the viability of our approach. We also describe Numi, our framework for supporting collaborative infrastructure and ad hoc computing, along with a sample application built on top of it highlighting the benefits of our proposed approach.
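The carrier-selection step described in the abstract, where base stations pair a needy device with a passing device that has spare capacity and an intersecting route, can be illustrated with a toy sketch. This is not the paper's implementation; the route model and field names below are invented for illustration.

```python
# Toy sketch (not the paper's algorithm): a base station picks a carrier
# whose planned route intersects the needy device's route and that has
# spare capacity to haul the predicted data.

def pick_carrier(needy_route, candidates):
    """Return the id of the first candidate with spare capacity whose
    route shares at least one cell with the needy route, else None."""
    needy_cells = set(needy_route)
    for carrier in candidates:
        if carrier["spare_kb"] <= 0:
            continue  # no room to carry extra data
        if needy_cells & set(carrier["route"]):
            return carrier["id"]  # routes intersect: usable carrier
    return None

candidates = [
    {"id": "A", "route": ["c1", "c2"], "spare_kb": 0},   # no capacity
    {"id": "B", "route": ["c4", "c5"], "spare_kb": 64},  # never meets needy
    {"id": "C", "route": ["c3", "c6"], "spare_kb": 128}, # meets at cell c3
]
print(pick_carrier(["c2", "c3"], candidates))  # -> C
```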


What's New in Firefox 3

Firefox 3 Beta 2 is a developer preview release of Mozilla's next generation Firefox browser and is being made available for testing purposes only.

These beta releases are targeted to Web developers and our testing community to gain feedback before advancing to the next stage in the release process. The final version of Firefox 3 will be released when we qualify the product as fully ready for our users. Users of the latest released version of Firefox should not expect their add-ons to work properly with this beta.

Much of the work leading up to this beta has been around developing the infrastructure to support a bunch of exciting new features. With this second beta, you'll get a taste of what's coming in Firefox 3, but there's still more to come, and much of what you'll see is still a bit rough around the edges.

Please see below for an extensive list of features and enhancements found in Firefox 3 Beta 2, as well as known issues and frequently asked questions.

As always, we appreciate your feedback either through this feedback form or by filing a bug in Bugzilla.

What's New in Firefox 3

Firefox 3 is based on the new Gecko 1.9 Web rendering platform, which has been under development for the past 28 months and includes nearly 2 million lines of code changes, fixing more than 11,000 issues. Gecko 1.9 includes some major re-architecting for performance, stability, correctness, and code simplification and sustainability. Firefox 3 has been built on top of this new platform, resulting in a more secure, easier-to-use, more personal product with a lot under the hood to offer website and Firefox add-on developers.

[Improved in Beta 2!] Firefox 3 Beta 2 includes approximately 900 improvements over the previous beta, including fixes for stability, performance, memory usage, platform enhancements and user interface improvements. Many of these improvements were based on community feedback from the previous beta.

More Secure
  • One-click site info: Click the site favicon in the location bar to see who owns the site. Identity verification is prominently displayed and easier to understand. In later versions, Extended Validation SSL certificate information will be displayed.
  • Malware Protection: Firefox warns users when they arrive at sites known to install viruses, spyware, trojans, or other malware (note: the blacklist of malware sites is not yet activated).
  • New Web Forgery Protection page: the content of pages suspected of being web forgeries is no longer shown.
  • New SSL error pages: clearer and stricter error pages are used when Firefox encounters an invalid SSL certificate.
  • Add-ons and Plugin version check: Firefox now automatically checks add-on and plugin versions and will disable older, insecure versions.
  • Secure add-on updates: to improve add-on update security, add-ons that provide updates in an insecure manner will be disabled.
  • Anti-virus integration: Firefox will inform anti-virus software when downloading executables.
  • Vista Parental Controls: Firefox now respects the Vista system-wide parental control setting for disabling file downloads.
  • [Improved in Beta 2!] Effective top-level domain (eTLD) service: better restricts cookies and other domain-restricted content to a single domain.
  • [Improved in Beta 2!] Better protection against cross-site JSON data leaks.
Easier to Use
  • Easier password management: an information bar replaces the old password dialog so you can now save passwords after a successful login.
  • Simplified add-on installation: the add-ons whitelist has been removed making it possible to install extensions from third-party sites in fewer clicks.
  • [Improved in Beta 2!] New Download Manager: the revised download manager makes it much easier to locate downloaded files, and displays where a file came from.
  • Resumable downloading: users can now resume downloads after restarting the browser or resetting their network connection.
  • Full page zoom: from the View menu and via keyboard shortcuts, the new zooming feature lets you zoom in and out of entire pages, scaling the layout, text and images.
  • Tab scrolling and quickmenu: tabs are easier to locate with the new tab scrolling and tab quickmenu.
  • Save what you were doing: Firefox will prompt users to save tabs on exit.
  • Optimized Open in Tabs behavior: opening a folder of bookmarks in tabs now appends the new tabs rather than overwriting.
  • Location and Search bar size can now be customized with a simple resizer item.
  • Text selection improvements: multiple text selections can be made with Ctrl/Cmd; double-click drag selects in "word-by-word" mode; triple-clicking selects a paragraph.
  • Find toolbar: the Find toolbar now opens with the current selection.
  • Plugin management: users can disable individual plugins in the Add-on Manager.
  • Integration with Vista: Firefox's menus now display using Vista's native theme.
  • Integration with the Mac: Firefox now uses the OS X spellchecker and supports Growl for notifications of completed downloads and available updates.
  • [Improved in Beta 2!] Integration with Linux: Firefox's default icons, buttons, and menu styles now use the native GTK theme.
More Personal
  • Star button: quickly add bookmarks from the location bar with a single click; a second click lets you file and tag them.
  • Tags: associate keywords with your bookmarks to sort them by topic.
  • [Improved in Beta 2!] Location bar & auto-complete: type in all or part of the title, tag or address of a page to see a list of matches from your history and bookmarks; a new display makes it easier to scan through the matching results and find that page you're looking for.
  • [Improved in Beta 2!] Smart Bookmarks Folder: quickly access your recently bookmarked and tagged pages, as well as your more frequently visited pages with the new smart bookmarks folder on your bookmark toolbar.
  • [Improved in Beta 2!] Places Organizer: view, organize and search through all of your bookmarks, tags, and browsing history with multiple views and smart folders to store your frequent searches.
  • [Improved in Beta 2!] Web-based protocol handlers: web applications, such as your favorite webmail provider, can now be used instead of desktop applications for handling mailto: links from other sites. Similar support is available for other protocols (Web applications will have to first enable this by registering as handlers with Firefox).
  • Easy to use Download Actions: a new Applications preferences pane provides a better UI for configuring handlers for various file types and protocol schemes.
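The web-based protocol handler item above can be illustrated generically: Firefox keeps a registry mapping a scheme such as mailto: to a web application's URL template (a site registers itself via JavaScript's navigator.registerProtocolHandler, passing a URL containing %s). The sketch below models that lookup in Python; the scheme and URLs are invented for illustration.

```python
# Illustrative model of a scheme -> web-handler registry (hypothetical
# names; in Firefox the registration API is registerProtocolHandler).
from urllib.parse import quote

handlers = {}

def register_handler(scheme, url_template):
    """Associate a URI scheme with a web app URL containing %s."""
    handlers[scheme] = url_template

def resolve(uri):
    """Rewrite scheme:payload into the registered web handler's URL,
    substituting the percent-encoded URI for %s; None if unregistered."""
    scheme, _, _ = uri.partition(":")
    template = handlers.get(scheme)
    if template is None:
        return None  # fall back to a desktop application
    return template.replace("%s", quote(uri, safe=""))

register_handler("mailto", "https://webmail.example/compose?to=%s")
print(resolve("mailto:user@example.org"))
# -> https://webmail.example/compose?to=mailto%3Auser%40example.org
```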
Improved Platform for Developers
  • New graphics and font handling: new graphics and text rendering architectures in Gecko 1.9 provide rendering improvements in CSS and SVG, as well as improved display of fonts with ligatures and complex scripts.
  • Native Web page forms: HTML forms on Web pages now have a native look and feel on Mac OS X and Linux (Gnome) desktops.
  • Color management: Firefox can now adjust images with embedded color profiles (set gfx.color_management.enabled to true in about:config and restart the browser to enable).
  • Offline support: enables web applications to provide offline functionality (website authors must add support for offline browsing to their site for this feature to be available to users).
  • A more complete overview of Firefox 3 for developers is available for website and add-on developers.
Improved Performance
  • Reliability: a user's bookmarks, history, cookies, and preferences are now stored in a transactionally secure database format, which prevents data loss even if the system crashes.
  • [Improved in Beta 2!] Speed: Major architectural changes (such as the move to Cairo and a rewrite to how reflowing a page layout works) put foundations in place for major performance tuning which have resulted in speed increases in Beta 2, and will show further gains in future Beta releases.
  • [Improved in Beta 2!] Memory usage: Over 300 individual memory leaks have been plugged, and a new XPCOM cycle collector completely eliminates many more. Developers are continuing to work on optimizing memory use (by releasing cached objects more quickly) and reducing fragmentation. Beta 2 includes over 30 more memory leak fixes, and 11 improvements to our memory footprint.
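The "transactionally secure database format" in the Reliability item is SQLite, which Firefox 3 uses for bookmarks and history (places.sqlite). A minimal sketch of why transactions prevent data loss, using Python's standard sqlite3 module; the table name and schema are invented for illustration:

```python
# Sketch of transactional storage: either every statement in the
# transaction commits, or none do - a crash mid-write cannot leave
# half-updated rows behind. Schema and names are illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bookmarks (url TEXT, tag TEXT)")

try:
    with con:  # opens a transaction; commits on success, rolls back on error
        con.execute("INSERT INTO bookmarks VALUES ('http://a.example', 'news')")
        raise RuntimeError("simulated crash before commit")
except RuntimeError:
    pass

# The interrupted insert was rolled back: the table is still empty.
count = con.execute("SELECT COUNT(*) FROM bookmarks").fetchone()[0]
print(count)  # -> 0
```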


Are the Browser Wars Back? How Mozilla's Firefox trumps Internet Explorer.

I usually don't worry about PC viruses, but last week's Scob attack snapped me awake. The clever multi-stage assault, carried out by alleged Russian spam crime lords, infiltrated corporate Web servers and then used them to infect home computers. The software that Scob (also known as Download.ject) attempted to install on its victims' machines included a keystroke logger.

In less than a day, Internet administrators sterilized the infection by shutting down the Russian server that hosted the spyware. But not before a barrage of scary reports had circled the world. "Users are being told to avoid using Internet Explorer until Microsoft patches a serious security hole," the BBC warned. (Disclosure: Microsoft owns Slate.) CNET reporter Robert Lemos zeroed in on why the attack was so scary. "This time," he wrote, "the flaws affect every user of Internet Explorer." That's about 95 percent of all Net users. No matter how well they had protected themselves against viruses, spyware, and everything else in the past, they were still vulnerable to yet another flaw in Microsoft's browser.

Scob didn't get me, but it was enough to make me ditch Explorer in favor of the much less vulnerable Firefox browser. Firefox is built and distributed free by the Mozilla Organization, a small nonprofit corporation spun off last year from the fast-fading remnants of Netscape, which was absorbed by AOL in 1999. Firefox development and testing are mostly done by about a dozen Mozilla employees, plus a few dozen others at companies like IBM, Sun, and Red Hat. I've been using it for a week now, and I've all but forgotten about Explorer.



You've probably been told to dump Internet Explorer for a Mozilla browser before, by the same propeller-head geek who wants you to delete Windows from your hard drive and install Linux. You've ignored him, and good for you. Microsoft wiped out Netscape in the Browser Wars of the late 1990s not only because the company's management pushed the bounds of business ethics, but also because its engineers built a better browser. When Netscape CEO Jim Barksdale approved the Mozilla project—an open-source browser based on Netscape's code—in 1998, it seemed like a futile act of desperation.

But six years later, the surviving members of the Mozilla insurgency are staging a comeback. The latest version of Firefox, released this Monday, has a more professional look, online help, and a tool that automatically imports your bookmarks, history, site passwords, and other settings from Explorer. Meanwhile, all-conquering Internet Explorer has been stuck in the mud for the past year, as Microsoft stopped delivering new versions. The company now rolls out only an occasional fix as part of its Windows updates. Gates and company won the browser war, so why keep fighting it?

The problem is that hackers continue to find and exploit security holes in Explorer. Many of them take advantage of Explorer's ActiveX system, which lets Web sites download and install software onto visitors' computers, sometimes without users' knowledge. ActiveX was meant to make it easy to add the latest interactive multimedia and other features to sites, but instead it's become a tool for sneaking spyware onto unsuspecting PCs. That's why the U.S. Computer Emergency Readiness Team, a partnership between the tech industry and Homeland Security, recently took the unusual step of advising people to consider switching browsers. Whether or not you do, US-CERT advises increasing your Internet Explorer security settings, per Microsoft's instructions. (Alas, the higher setting disables parts of Slate's interface.) Even if you stop using Explorer, other programs on your computer may still automatically launch it to connect to sites.

Firefox eschews ActiveX and other well-known infection paths. You can configure it to automatically download most files when you click on them, but not .exe files, which are runnable programs. I thought this was a bug before I realized Firefox was saving me from myself, since .exe files could be viruses or stealth installers.

For actual Web surfing, Firefox's interface is familiar enough to Explorer users. There's hardly anything to say about it, which is a compliment. Some interactive features designed exclusively for Internet Explorer won't appear, such as the pop-up menus on Slate's table of contents. A few sites don't display properly, but they're pretty rare. More common are those that stupidly turn non-Explorer browsers away by claiming they're "unsupported." Trusty, useful ActiveX-powered sites such as Windows Update don't load at all, but that's the idea. You can always launch Internet Explorer for those when you need to.

Firefox also adds a productivity feature that Explorer has never gotten around to: tabbed browsing. You can open several Web pages in the same window and flip through them as tabs, similar to those used in some of Windows' dialog boxes. It's tough to understand why tabbed browsing is such an improvement until you've tried it. But if you're in the habit of opening a barrage of news and blog links every morning and then reading them afterward, or clicking on several Google results from the same search, tabbed browsing is an order of magnitude more efficient and organized than popping up a whole new window for each link.

That said, be aware that getting started with Firefox isn't a one-click operation. After installing the browser, you'll need to reinstall plug-ins for some programs, as well as Sun's Java engine for any Java-powered pages. Let me save you an hour of head-scratching here: Save Sun's Java installation file to your desktop, then go back to Firefox's menus and select File -> Open File to install the downloaded .xpi file into the browser. That'll work where other methods fail without explanation.

Once you're set up, it still takes a day or two to get used to the interface and feature differences between Explorer and Firefox, as well as the fact that your favorite sites may look a little different. That's why I left it out of Slate's 20-minute anti-virus plan. But if you've got time to make the switch, the peace of mind is worth it. Mozilla also makes a free e-mail program called Thunderbird and a calendar tool called Sunbird, if you want to avoid using Outlook and Outlook Express, two other virus carriers. They're nowhere near as feature-packed as Outlook, but the e-mail client includes a spam filter that works pretty well after you train it on four or five thousand messages—in my case, one week's mail.

Will Firefox make your computer hackproof? Even Mozilla's spokespeople stress that no software can be guaranteed to be safe, and that Firefox's XPInstall system could conceivably be tricked into installing a keystroke logger instead of Sun's Java engine. But for now, there's safety in numbers—the lack of them, that is. Internet Explorer is used by 95 percent of the world. Firefox's fan base adds up to 2 or 3 percent at most. Which browser do you think the Russian hackers are busily trying to break into again?


Friday, August 15, 2008

THE MEANING OF TAKWA IS LOVE

Prof. Dr. H. Nasaruddin Umar, M.A.



Calamity is everywhere and can strike at any time. What matters for us is how we respond to misfortune when it befalls us or our families. Take rain, for example: rainfall beyond normal levels sometimes no longer functions as a blessing but can become a curse, especially when it brings flooding. This can be called a small calamity. As we know, calamities and life's hardships are another side of God's will to reach out to His servants. It is as if Allah longs for His servants, and so He reaches out to them in the form of misfortune. A calamity is a kind of love letter from God to His beloved.

Why is calamity called a love letter? Because there may come a time when a person is unable to draw near to God, lulled by the worldly luxuries he possesses, so that the door of his heart closes; he is no longer sensitive, no longer feels any longing for his Lord. Often, longing for God arises only when it is provoked by the arrival of misfortune; without misfortune, people often forget Allah. Disappointing experiences, such as disturbances that disrupt the normal course of our lives, should be understood as God's way of reminding us.

Allah says: "O you who believe, fear Allah as He should be feared, and do not die except as Muslims." (QS. Ali Imrân [3]: 102)

This verse is a special call to those who believe. They are asked to have takwa in the truest sense, takwa at its peak. Yet in another verse it is said, "Fear Allah as much as you are able." (QS. At-Taghabûn [64]: 16). Allah is Most Just. If the standard of takwa were the takwa of the Prophet or of the saints (aulia), we ordinary people would surely find it hard to attain. But Allah knows well that not all of His servants have the same inner experience and the same level of makrifah.

What is the difference between the first verse and the second?

The first verse asks us to do our utmost to realize takwa within ourselves. But if we have truly tried and still fall far short of that standard, there is no need to worry, for Allah also says, "Fear Allah as much as you are able." Do not be disheartened if our takwa cannot match that of the Prophet and the saints. What matters is that we have tried to the best of our ability, and then accept ourselves as we are.

What is meant by takwa?

Many people translate takwa as fear of Allah. In fact, this translation is not entirely accurate. Fear is indeed one of the meanings of takwa, but that is only about 30 percent of it. Fear is just one component of takwa, and not the largest one; the most important component of takwa is love for Allah. Among the Sufis, takwa is understood as love for Allah; among the jurists (fukaha), takwa is fear of Allah. The combination of love and fear is the ideal understanding of takwa for us.

In truth, Allah is not some terrifying figure whom we must simply dread. Rather, Allah is the Most Beautiful, to be loved; the Most Compassionate; the Most Gentle. Takwa, then, means that on one side we fear and revere Allah, and on the other side we love Him.

A miniature of our attitude toward Allah is precisely our attitude toward our parents. On one side we respect and fear them; on the other we need and love them. Even when scolded, even when struck, the people we love most are still our parents. Even when God sends misfortune, even when God often tests us, the One we love is still Allah. This is a concrete, measurable way to understand takwa.

To have takwa toward Allah means to fear Him and to love Him. At times Allah appears as the Most Great, to be feared, above all by sinners. Before someone who has just committed a sin, God appears as the Most Just, the Punisher, even the Tormentor, so that the sinner's courage shrinks and he no longer dares to sin.

But Allah appears as the Most Loving before those who worship Him. One who worships and does good deeds sincerely need not be afraid of Allah; for such a person, the most fitting attitude is to love Allah.

Any believer who sins will surely feel fear of Allah. And any believer who has worshipped earnestly and sincerely will surely feel love for Allah arise within, together with great hope of receiving His love. Thus the pattern of the relationship between man and God is one of fear and love. This is Islam.

Other religions have a single relational pattern, generally one of fear toward their God or gods. That is why other religions require mediation between man and his God or deity. Some even depict their god in terrifying form: if necessary, a statue is made, huge in stature, fierce of face, fangs protruding, even carrying a club (gada). Why? As mediation, so that the worshipper's soul fears what it worships. The greater the fear, the closer to the god; the greater the fear, the more intense the worship.

Islam need not be like that. Allah is not a Most Terrifying figure to be dreaded, but stands out rather as the Most Merciful Lord to be loved. If our relational pattern is fear, we will picture God as transcendent, very far away. But if we build a relationship of love, God seems very close to us. Allah says in the Quran: "Indeed, I am closer to him than his own jugular vein." (QS. Qâf [50]: 16)

Among the Sufis a question often arises: is God within me, or am I within God? So close is God to His servant, and this closeness follows the pattern of love. In my view, this is the most fitting way for us to approach God: the relationship of love. A relationship of fear tends toward formality; it is dry and rigid, and heavily dependent on mood. A relationship of love is more permanent, fresh, peaceful, and full of beautiful hope.

So strive to love God more, love and love for God. That is takwa. Then even our prayer becomes like that of the Sufis: "O Allah, I worship You not in hope of Your paradise, and I abstain from sin not out of fear of Your hellfire. Cast me into Your hell if I worship You for fear of hell. Keep me far from Your paradise if I worship You out of desire for paradise. I worship You, O Allah, solely out of my deep love for You."

Extraordinary. This is what will radiate its effect into society. Whatever we do will be full of peace. Even when struck by misfortune or sorrow, our hearts will remain calm and accepting. Meeting a brother or a friend, we smile; peace becomes our bearing. If love for God burns within every servant, peace among people will follow. Insya Allah.

The Essence of Takwa

From: http://ustadzkholid.wordpress.com/2007/12/25/hakekat-takwa/

Takwa is vital and needed in every part of a Muslim's life, yet many still do not know its essence. Every Friday the preachers call for takwa and the congregations hear it over and over again, but what they hear is often not understood correctly and precisely.

The Meaning of Takwa

To know the essence of takwa we must return to the Arabic language, for that is where the word comes from. Etymologically, the word takwa (التَّقْوَى) derives from the verb (وَقَى), which carries the meanings of covering, guarding, being cautious, and seeking protection. Hence Imam Al Ashfahani stated: takwa is placing the soul under protection from what it fears; fear itself is then also called takwa. Thus, in the terminology of the Shari'ah, takwa is guarding oneself from sinful deeds. To have takwa toward Allah, then, is to fear Him and shun His wrath, as though we shield ourselves from His anger and punishment by obeying Him and seeking His pleasure.

Takwa is a bond that restrains the soul so that it does not slip out of control and follow its wishes and desires. Through takwa a person can guard and govern his manners and character at every moment of his life, for takwa is in essence muroqabah (watchfulness of Allah), striving hard for Allah's pleasure while fearing His punishment. Very apt indeed is the scholars' definition that a servant's takwa toward Allah is to set up a protective barrier between himself and what he fears of Allah's wrath and anger, by performing acts of obedience and shunning acts of disobedience.

Here are some sayings of the early scholars (salaf) explaining the meaning of takwa:
1. The noble Caliph Umar bin Al Khothob once asked Ubai bin Ka'ab about takwa. Ubai asked: O Commander of the Faithful, have you ever walked along a road full of thorns? He answered: Yes. Ubai asked again: What did you do? Umar answered: I looked carefully and watched where both my feet would land; I moved one foot forward and held the other back, afraid of being pricked by a thorn. Ubai said: That is takwa.[1]
2. Caliph Umar bin Al Khothob also said: A servant does not reach the essence of takwa until he abandons the doubt that lingers in his heart.
3. Caliph Ali bin Abi Tholib, when asked about takwa, answered: Fear of Allah, acting according to the revelation (the Qur'an and the Sunnah), contentment with little, and preparing oneself to face the Day of Judgment.
4. The companion Ibnu Abas said: The person of takwa is one who fears Allah and His punishment.
5. Tholq bin Habib said: Takwa is to perform obedience to Allah upon a light from Allah, hoping for His reward, and to abandon disobedience upon a light from Allah, fearing His punishment.
6. Ibnu Mas'ud interpreted Allah's words اتَّقُواْ اللَّهَ حَقَّ تُقَاتِهِ by saying: To obey without disobeying, to remember Allah without forgetting Him, and to give thanks.

Takwa Resides in the Heart

Takwa is a deed of the heart (kalbu), and its place is the heart, on the basis of Allah's words: "Thus it is; and whoever honors the symbols of Allah, surely it arises from the piety of hearts." (QS. 22:32). In this verse takwa is ascribed to the heart, because the essence of takwa lies in the heart. Likewise Allah's words: "Indeed, those who lower their voices in the presence of the Messenger of Allah, they are the ones whose hearts Allah has tested for piety." (QS. 49:3)

The evidence from the hadith of the Prophet is his saying: التَّقْوَى هَهُنَا التَّقْوَى هَهُنَا التَّقْوَى هَهُنَا ويُشِيْرُ إِلَى صَدْرِهِ [ثَلاَثَ مَرَّاتٍ] بِحَسْبِ امْرِىءٍ مِنَ الشَّرِّ أَنْ يَحْقِرَ أَخَاهُ الْمُسْلِمَ كُلُّ اْلمُسْلِمِ عَلَى الْمُسْلِمِ حَرَامٌ دَمُّهُ وَعِرْضُهُ — "Takwa is here! Takwa is here! Takwa is here!" and he pointed to his chest (three times). "It is evil enough for a man to belittle his Muslim brother. Everything of a Muslim is inviolable to another Muslim: his blood, his honor, and his property." (HR Al Bukhori and Muslim). Likewise the well-known long hadith Qudsi from the companion Abu Dzar, which includes: يَا عِبَادِي لَوْ أَنَّ أَوَّلَكُمْ وَآخِرَكُمْ وَإِنْسَكُمْ وَجِنَّكُمْ كَانُوا عَلَى أَتْقَى قَلْبِ رَجُلٍ وَاحِدٍ مِنْكُمْ مَا زَادَ ذَلِكَ فِي مُلْكِي شَيْئًا — "O My servants, if the first of you and the last of you, your men and your jinn, were all upon the heart of the most God-fearing man among you, that would not add a thing to My dominion." (HR Muslim)

In this hadith takwa is ascribed to its seat, namely the heart. Yet even though takwa is a deed of the heart and resides in the heart, it must still be proven and expressed through the deeds of the limbs. Whoever claims to have takwa while his deeds contradict his words has lied.

This takwa varies according to the ability each individual possesses, as Allah says: فاتّقوا اللّهَ ما استَطَعتُم — "Fear Allah as much as you are able."

May Allah grant us perfect takwa.



[1] Al Jaami' Liahkam Al Qur'an by Al Qurthubi, 1/162

Friday, July 11, 2008

Windows Vista

Vista Hardware Support

To help buyers identify hardware suitable for running Vista, Microsoft has a two-tier certification and logo program. The "Works with Windows Vista" logo provides assurance of basic Vista compatibility, and "Certified for Windows Vista" indicates that products specifically enable, or take advantage of, Vista features (such as Windows Aero).



Vista supports new hardware in a variety of ways. The OS includes DirectX 10, supporting geometry shaders, graphics memory paging, graphics hardware virtualization, and other features that should enable ever-more-photorealistic games and simulations. (For our review of the first graphics chip and card ready to take advantage of DX10, go to go.pcmag.com/geforce8800.) Audio and printer driver architectures have changed as well, again with the goal of enhancing performance and stability. Vista also offers improved support for new varieties of peripherals and components, including Blu-ray and HD DVD devices.

Laptop and Tablet PC users get new goodies, too, without having to buy separate versions. New Tablet features include touch-screen support, improved pen navigation, gestures, and personalized handwriting recognition. And Media Center is now integral rather than packaged as a separate OS edition.

An intriguing Vista technology called SideShow lets devices with "auxiliary screens" show snippets of pertinent information even when the system isn't powered on. Imagine the Caller ID display on the outside of a clamshell cell phone, only more powerful and flexible. We're waiting for hardware that will let us test SideShow firsthand.

from: http://www.pcmag.com/article2/0,2817,2089594,00.asp

Windows Vista

Vista Fundamentals

Some of an operating system's crucial responsibilities include managing hardware and drive storage and providing a set of APIs (application programming interfaces) that other software can rely on. And, indeed, some of Vista's most important enhancements lie beneath the surface. Many of these improvements are security related. We've written extensively about them, and you can get the latest in "Microsoft Locks Down Security...and Roils Security Vendors".

Networking is another revamped area. Vista's new TCP/IP stack includes native IPv6 support and auto-tuning via TCP window scaling. And it has better built-in Wi-Fi support.

Vista also has a number of performance enhancers. SuperFetch tracks frequently used programs and preloads them. ReadyBoost lets you use flash memory on a high-speed USB drive as a supplemental swap file (this can be substantially faster than a spinning hard drive). ReadyDrive supports hybrid hard drives with built-in flash-memory caches. There's also a low-priority I/O mechanism that lets programs such as Windows Defender run scans in the background with less disruption to foreground activity; and Vista automatically schedules drive defragmentation.
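The preloading idea behind SuperFetch can be illustrated with a toy sketch: track how often each program is launched and preload the most frequent ones first. This is a hypothetical model for illustration only, not Microsoft's actual implementation (which also weighs time of day and usage patterns).

```python
from collections import Counter

class PrefetchTracker:
    """Toy frequency-based preloader, illustrating the general idea
    behind SuperFetch (hypothetical; not Microsoft's implementation)."""

    def __init__(self, preload_count=2):
        self.launches = Counter()        # program name -> launch count
        self.preload_count = preload_count

    def record_launch(self, program):
        self.launches[program] += 1

    def preload_candidates(self):
        # Suggest the most frequently launched programs for preloading.
        return [p for p, _ in self.launches.most_common(self.preload_count)]

tracker = PrefetchTracker()
for p in ["browser", "editor", "browser", "mail", "browser", "editor"]:
    tracker.record_launch(p)
print(tracker.preload_candidates())  # ['browser', 'editor']
```

A real prefetcher would of course load the program images into otherwise-idle RAM rather than merely naming them; the point here is only the frequency-ranking heuristic.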

On the whole, my experience has been positive—on a screamer system. Others have had worse luck, particularly those who skimped on RAM. The SYSmark and MobileMark benchmark tests are currently being modified for testing Vista's performance; once they're up and running, we'll post performance results at go.pcmag.com/vistaspeed.

Vista's new sleep mode is supposed to make suspending and resuming faster and more reliable. With the machines I've been testing it on, I don't sense huge benefits from the new sleep mode. Whether that's due to Vista or to third-party hardware or drivers is hard to determine.

Microsoft also made a lot of more fundamental changes in the OS kernel, which provides low-level functions such as memory management, multi-processor synchronization, and I/O scheduling. Most are intended to help improve performance, security, and reliability.
Vista also extends the Windows API by incorporating the .NET 3.0 framework, giving developers capabilities that include Windows Presentation Foundation (formerly code-named Avalon), Windows Communication Foundation (formerly Indigo), and Windows CardSpace (formerly InfoCard). But there's no WinFS (Windows Future Storage), the database-backed file system that was to be one of Vista's core innovations. As a result, Vista's support for tagging and relating files is less extensive than Microsoft promised back when the OS was still known by its code name, Longhorn.

Other additions are APIs to support RSS natively and a central RSS store. For example, if you subscribe to an RSS feed in Internet Explorer 7, the RSS reader Sidebar gadget automatically detects it.

Windows Vista

from: http://www.pcmag.com/article2/0,2817,2088444,00.asp

by John Clyman

Windows Vista is here at last. One of the largest software projects ever undertaken, Vista is indisputably a milestone—despite Microsoft's having abandoned many of its most ambitious goals for the OS—and not just for Microsoft but for the entire PC industry.

Of course, Vista is not without its skeptics. PC makers say it will require more processing power, graphics capabilities, and memory than is typical of today's mainstream machines. Software vendors complain that Vista's vaunted security features are, in fact, locking them out. Users may wonder if it offers enough that's truly new to be worth the bother—particularly given that a number of Vista features and bundled applications are also available for Windows XP.

We've performed extensive, hands-on analysis of Vista and sorted out the claims to help you decide whether, or more realistically when, to make the move—and to show you what you can expect when you do.

Windows Vista 

The Vista Promise

Microsoft calls Vista "a breakthrough computing experience." That's marketing hyperbole, for sure, but it's not entirely unfounded. The new OS is far more than Windows XP with a pretty new face. Many aspects of Vista are substantive improvements: stronger security, better built-in apps, networking enhancements, parental controls, and DirectX 10 graphics support, to name just a few.




As a whole, Vista feels more evolutionary than revolutionary. That's not all bad; one of Microsoft's strengths has been its commitment to backward compatibility, which continues with Vista.

Vista's real competitor, though, is Windows XP. For many users, XP is good enough. And for all the advances in Vista, it's hard to avoid seeing the things that aren't as good as they could have been.

Nor is Vista bug-free. As I assessed final code, I ran into a variety of small but annoying glitches and found plenty of features that didn't work as seamlessly as I would have liked. I can't shake the feeling that Vista's release was rushed.

So what's our verdict? Vista is good—in some respects very good—but not spectacular. Call it a nice-to-have product rather than a must-have.

If you're buying a new consumer PC this spring, it probably makes sense to get Vista. (For a few contrarian points of view, see "Why Not to Buy Vista".) Soon, there won't be much of a choice; according to Microsoft's support life cycle, retail PC buyers will have only a year after Vista's release to buy Windows XP.

If you've already got a PC running Windows XP smoothly, it's harder to see a reason to upgrade right away. You can wait until you replace your machine, or at least a few months, until Vista's kinks are worked out. (If you're curious to see how well your existing machine will support Vista, try Microsoft's Vista Upgrade Advisor, available at www.windowsvista.com/upgradeadvisor). In the meantime, you can download some of the new software included in Vista, such as Internet Explorer 7, Windows Media Player 11, and a desktop search utility, to enjoy some of the same capabilities you'd get in Vista itself.

For business customers, it makes sense to start evaluating Vista now, particularly since improved deployment, management, and security could lead to significant cost reductions in the long term. But you'll want to be confident about compatibility and support before you make the transition en masse. (See "Vista at Work," for more on features for businesses in Windows Vista Business and Vista Enterprise.)

Let's dive in and take a more detailed look at what Vista has to offer.



Macintosh vs. Windows: Choosing to take a bite of the Apple

By Winn Schwartau

WinTel finally broke my back. Or perhaps it was that last series of inexplicable crashes, dirty reinstalls and similar constant complaints from co-workers and friends. 

WinTel finally broke my back, and I wanted to know why. 

I was a PC bigot and I am still a security guy. 

Having lived on PC [DOS, Win, etc.] for 25 years, I, like so many other people, just assumed [ASSuME] that Macs were toys and PCs were for us grownups. I also assumed that the endless assault upon my digital being was a God-Given Right of the bad guys and I was just going to have to deal with it. I also assumed, without ever looking into it in detail, that desktop/laptop security woes were a ubiquitous reality.

Mad as Hell archive

Want to read other installments in the series? This series will be updated twice weekly, so don't forget to check back; it may benefit your organization.

I was wrong. So I decided to examine the security issues I was facing and see what I could do about them. But there was a lot more than that. 

If PCs are supposed to be for Ma & Pa and the masses, how come I spend so much time making my machine live? How come these blasted useful devices are so much more difficult than a toaster or a microwave or a car? What was the Ma & Pa of the universe doing? 

Then I thought about the security of the desktop -- not from the traditional bits, bytes and patches viewpoint, but from the one in which I was trained: as a systems engineer. Once I began viewing desktop security from that vantage point, things became exquisitely clear. 

I had been wrong all of these years, having been sucked into the popular maelstrom of blinded WinTel acceptance, and all of the security problems that come with choosing that technology for mission critical work. 

The "experiment' I began on April 29 has unexpectedly caused a frenzy of examination of the security aspects of the PC, and I guess a lot of folks are reading about my transitions. 

NOTE: I bought my Macs. Retail. I do not know Steve Jobs. I have no Apple stock. I am not a paid Mac whore. OK? 

In the "Mad as Hell" series, I will be exploring: 
How to make Ma & Pa happy campers again.
Why the fear of computing is slowly being cleansed from my carbon system. 
How I believe we can vastly improve the national security of this country, its critical infrastructures and safe corporate computing. 
How to really make security an enabler versus an inhibitor. 
If I am correct, I believe that by viewing PC security differently, we can save our country tens of billions of dollars every year, and measurably increase productivity within the corporate world while simultaneously reducing costs. 
The "Mad as Hell" series is about security -- period. Do not expect uber-geekinesss. There are plenty of folks who can do that more admirably than me. I am terrifically interested in the Big Security Picture and what we can all do to drastically improve it with minimal pain or cost
.

Mac OS X Leopard vs. Windows Vista: The Final Word

The Mac vs. Windows wiki provides an in-depth comparison between two of the most popular consumer operating systems today: Mac OS X Leopard and Windows Vista (Home Premium and Ultimate). We answer the tough questions such as... 


Which features does one have the other lacks? 
Who provides a more "complete" user experience out of the box? 
How do they stack up against other operating systems such as Linux? 

We recognize that different people have different needs for their computers. What's best for one is not always best for another. We simply present the facts, and let you decide for yourself which operating system is best. 

And since Mac vs. Windows is a full-blown wiki, you can edit the comparisons on this website straight from your web browser. See a feature missing? Spot an error? Sign up as a contributor and help us make this website the most comprehensive and unbiased source of its kind.


What is Mac OS X

The goal of this document is not to trace the history of Mac OS X in great detail, so this section will be brief. A more extensive history of Apple's operating systems is covered in A History of Apple's Operating Systems. 

All of Steve Jobs' operational responsibilities at Apple were "taken away" on May 31, 1985. Soon (within weeks), Jobs had come up with an idea for a startup for which he pulled in five other Apple employees. The idea was to create the perfect research computer (for Universities and research labs). Jobs had earlier met up with Nobel laureate biochemist Paul Berg, who had jumped at Jobs' suggestion of using a computer for various simulations. Although Apple was interested in investing in Jobs' startup, they were outraged (and sued Jobs) when they learnt about the five Apple employees joining Jobs. Apple dropped the suit later after some subsequent mutual agreements. The startup was NeXT Computer, Inc. 

Jobs unveiled the first NeXT Computer (running NEXTSTEP 0.8) on October 12, 1988, in San Francisco, although a mature release of the operating system took another year. The name "NEXTSTEP" has gone through a number of capitalization permutations, so we shall simply use "NEXTSTEP". NEXTSTEP 1.0 shipped on September 18, 1989, over two years later than what Jobs had first predicted and hoped for. NEXTSTEP was based on Mach 2.5 and 4.3BSD, and had an advanced GUI system based on Postscript. It used Objective-C as its native programming language, and included the NeXT Interface Builder. 

In the fall of 1990, the first web browser (offering WYSIWYG browsing and authoring) was created at CERN by Tim Berners-Lee on a NeXT computer. Tim's collaborator, Robert Cailliau, later went on to say that "... Tim's prototype implementation on NeXTStep is made in the space of a few months, thanks to the qualities of the NeXTStep software development system ..." 

NEXTSTEP 2.0 was released exactly a year later on September 18, 1990 (with support for CD-ROMs, color monitors, NFS, on-the-fly spell checking, dynamically loadable device drivers, ...). 2.1 followed on March 25, 1991, and 3.0 in September, 1992. 

At the 1992 NeXTWORLD Expo, NEXTSTEP 486, a version (costing $995) for the PC, was announced. Versions 3.1 and 3.2 were released in May and October, 1993, respectively. The last version of NEXTSTEP, 3.3, was released in February, 1995. A bit earlier, in 1994, NeXT and Sun had jointly released specifications for OpenStep, an open platform (comprising several APIs and frameworks) that anybody could use to create their own implementation of *STEP. NeXT's implementation was named OPENSTEP, the successor to the NEXTSTEP operating system. Only three versions of OPENSTEP were ever released: 4.0 (July 22, 1996), 4.1 (December, 1996), and 4.2 (January, 1997). SunOS, HP-UX, and even Windows NT had implementations at one point. The GNUstep Project still exists. Even though *STEP ran on many architectures (multi-architecture "fat binaries" were introduced by NeXT), by 1996 things were not looking good for NeXT, and the company was giving more importance to WebObjects, a development tool for the Web. 

Meanwhile, Apple had been desperately seeking to create an operating system that could compete with the onslaught from Microsoft. They actually wanted to beat Windows 95 to market, but failed. Apple suffered a setback when Pink OS, a joint venture between IBM and Apple, was killed in 1995. Apple eventually started work on an advanced operating system codenamed Copland, which was first announced to the public in 1994. The first beta of Copland went out in November, 1995, but a 1996 release (as planned and hoped) did not seem feasible. Soon afterwards, Apple announced that they would start shipping "pieces of Copland technology" beginning with System 7.6. Copland turned out to be a damp squib. 

At this point Apple became interested in buying Be, a company that was becoming popular as the maker of the BeBox, running the BeOS. The deal between Apple's Gil Amelio and Be's Gassée never materialized - it has been often reported that Apple offered $125 million while Be wanted an "outrageous" $200 million plus. The total investment in Be at that time was estimated to be only $20 million! 

Apple then considered Windows NT, Solaris and even Pink OS. Then, Steve Jobs called Amelio, and advised him that Be was not a good fit for Apple's OS roadmap. NeXT contacted Apple to discuss possibilities of licensing OPENSTEP, which, unlike BeOS, had at least been proven in the market. Jobs pitched NeXT technology very strongly to Apple, and asserted that OPENSTEP was many years ahead of its time. All this worked out, and Apple acquired NeXT in February, 1997, for $427 million. Amelio later quipped that "We chose Plan A instead of Plan Be." 

Apple named its upcoming NeXT-based system Rhapsody, while it continued to improve the existing Mac OS, often with technology that was supposed to go into Copland. Rhapsody saw two developer releases, in September, 1997, and May, 1998. 

Jobs became the interim CEO of Apple on September 16, 1997. 

Mac OS X was first mentioned in Apple's OS strategy announcement at the 1998 WWDC. Jobs said that OS X would ship in the fall of 1999, and would inherit from both Mac OS and Rhapsody. Moreover, backward compatibility would be maintained to ease customers into the transition. 

Mac OS X did come out in 1999, as Mac OS X Server 1.0 (March 16, 1999), a developer preview of the desktop version, and as Darwin 0.1. Mac OS X beta was released on September 13, 2000. 

At the time of this writing, Mac OS X has seen four major releases: 10.0 ("Cheetah", March 24, 2001), 10.1 ("Puma", September 29, 2001), 10.2 ("Jaguar", August 13, 2002), and 10.3 ("Panther", October 24, 2003). 

It would be an understatement to say that OS X is derived from NEXTSTEP and OPENSTEP. In many respects, it's not just similar, it's the same. One can think of it as OpenStep 5 or 6, say. This is not a bad thing at all - rather than create an operating system from scratch, Apple tried to do the smart thing, and used what they already had to a great extent. However, the similarities should not mislead you: Mac OS X is evolved enough that what you can do with it is far above and beyond NEXTSTEP/OPENSTEP.

from: http://www.kernelthread.com/mac/osx/history.html

Sunday, June 22, 2008

Bridging (networking)

From Wikipedia, the free encyclopedia


Bridging is a forwarding technique used in packet-switched computer networks. Unlike routing, bridging makes no assumptions about where in a network a particular address is located. Instead, it depends on broadcasting to locate unknown devices. Once a device has been located, its location is recorded in a forwarding table so as to preclude the need for further broadcasting. (The mapping between a device's MAC address and its IP address is stored separately, in an ARP table.)

The utility of bridging is limited by its dependence on broadcasting, so it is used only in local area networks. Currently, two different bridging technologies are in widespread use: transparent bridging predominates in Ethernet networks, while source routing is used in token ring networks. Bridging thus allows you to connect two different networks seamlessly at the data link layer, e.g. a wireless access point with a wired network switch, using MAC addresses as the addressing system. A bridge and a switch are very much alike.

Transparent bridging

Transparent bridging refers to a form of bridging that is "transparent" to the end systems using it: the end systems operate exactly as if the bridge were not there, while the bridge learns which addresses lie on which segment and forwards each frame only where it needs to go. It is used primarily in Ethernet networks, where it has been standardized as IEEE 802.1D.

The bridging functions are confined to the network bridges which interconnect the network segments. The active parts of the network must form a tree. This can be achieved either by physically building the network as a tree or by using bridges that run the spanning tree protocol, which builds a loop-free topology by selectively disabling redundant links. Note that a frame carries both a source address and a destination address. The MAC address FF:FF:FF:FF:FF:FF is the broadcast address for both networks; a frame sent to it is resent out on every available port, so a loop in the topology would let such frames circulate forever. For frames sent to a specific (non-broadcast) MAC address, the bridge learns where devices live: it monitors all frames traveling on the network and notes each frame's source address, together with the segment it arrived from, in a table. A frame whose destination address is already in the table is switched only onto the destination's home segment; a frame whose destination is not yet known is forwarded to every other segment until the specified destination is found and responds. By recording and resolving the MAC addresses of devices on each side in this way, the bridge can send frames across the networks without getting caught in an infinite loop.

Note that both source and destination addresses are used in this algorithm. Source addresses are recorded in entries in the table, while destination addresses are looked up in the table and matched to the proper segment to send the frame to.

As an example, consider two hosts (A and B) and a bridge (C). The bridge has two interfaces, C1 and C2; A is connected to C1 and B to C2, so the physical path is A - C - B. A sends a frame addressed to B. C records A's source MAC address (and the port it arrived on) in its table; since it does not yet know where B is, it forwards the frame out its other port, where B receives it. B, having received the frame from A, now transmits a frame in response. This time, the bridge already has A's address in its table, so it records B's address and sends the response only to A's port. Two-way communication is now possible between A and B without any further flooding. Note, however, that only a bridge along the direct path between A and B possesses a table entry for B. If a third host D, on the same side as A, sends a frame to B, the bridge records D's source address and forwards the frame to B's segment.
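The learn-then-forward behavior described above can be sketched in a few lines of Python. This is a minimal illustration of the algorithm (hypothetical class and port names, not any vendor's implementation):

```python
class LearningBridge:
    """Minimal sketch of transparent (learning) bridging: record each
    frame's source MAC against the port it arrived on; forward to the
    known port, otherwise flood to all other ports."""

    BROADCAST = "FF:FF:FF:FF:FF:FF"

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                      # MAC address -> port

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.table[src_mac] = in_port        # learn the source's segment
        if dst_mac != self.BROADCAST and dst_mac in self.table:
            out = {self.table[dst_mac]}      # known destination: one port
        else:
            out = self.ports - {in_port}     # unknown/broadcast: flood
        return out - {in_port}               # never send back out the way in

bridge = LearningBridge(["C1", "C2"])
print(bridge.handle_frame("mac-A", "mac-B", "C1"))  # {'C2'}  (flooded)
print(bridge.handle_frame("mac-B", "mac-A", "C2"))  # {'C1'}  (learned)
```

After the two frames of the A/B example, the bridge's table holds both addresses and no further flooding is needed, exactly as in the prose walk-through.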

Source route bridging

Source route bridging is used primarily on token ring networks, and is standardized in Section 9 of the IEEE 802.2 standard. The spanning tree protocol is not used; the operation of the network bridges is simpler, and much of the bridging work is performed by the end systems, particularly the sources, giving rise to the name.

A field in the token ring header, the routing information field (RIF), is used to support source-route bridging. Upon sending a packet, a host attaches a RIF to the packet indicating the series of bridges and network segments to be used for delivering the packet to its destination. The bridges merely follow the list given in the RIF - if a given bridge is next in the list, it forwards the packet, otherwise it ignores it.

When a host wishes to send a packet to a destination for the first time, it needs to determine an appropriate RIF. A special type of broadcast packet is used, which instructs the network bridges to append their bridge number and network segment number to each packet as it is forwarded. Loops are avoided by requiring each bridge to ignore packets which already contain its bridge number in the RIF field. At the destination, these broadcast packets are modified to be standard unicast packets and returned to the source along the reverse path listed in the RIF. Thus, for each route discovery packet broadcast, the source receives back a set of packets, one for each possible path through the network to the destination. It is then up to the source to choose one of these paths (probably the shortest one) for further communications with the destination.
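The route-discovery procedure above can be sketched as a small Python model (hypothetical topology and names; a real token ring bridge operates on RIF fields inside frame headers): broadcasts fan out through the bridges, each bridge refusing to extend a path that already lists it, and the source picks the shortest of the returned routes.

```python
# Sketch of source-route discovery: enumerate loop-free paths of bridges
# between a source and destination segment, as the all-routes broadcast
# would, then pick the shortest RIF for subsequent frames.

def discover_routes(topology, src, dst, rif=None, seen=None):
    """Return every loop-free path (RIF) from src to dst.
    topology maps each node to its neighbouring bridges/segments."""
    rif = rif or [src]
    seen = seen or {src}
    if src == dst:
        return [rif]
    routes = []
    for hop in topology.get(src, []):
        if hop not in seen:                  # a bridge ignores packets whose
            routes += discover_routes(       # RIF already lists it (no loops)
                topology, hop, dst, rif + [hop], seen | {hop})
    return routes

# Hypothetical ring: host A reaches D via bridge B1, or via B2 then B3.
ring = {"A": ["B1", "B2"], "B1": ["D"], "B2": ["B3"], "B3": ["D"]}
paths = discover_routes(ring, "A", "D")
best = min(paths, key=len)                   # source picks the shortest RIF
print(best)  # ['A', 'B1', 'D']
```

Each element of `paths` corresponds to one discovery packet arriving back at the source; choosing `min(..., key=len)` mirrors the "probably the shortest one" heuristic mentioned above.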

Network bridge

From Wikipedia, the free encyclopedia

A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model, and the term layer 2 switch is often used interchangeably with bridge. Bridges are similar to repeaters or network hubs, devices that connect network segments at the physical layer; however, a bridge works by using bridging, where traffic from one network is managed rather than simply rebroadcast to adjacent network segments. In Ethernet networks, the term "bridge" formally means a device that behaves according to the IEEE 802.1D standard—this is most often referred to as a network switch in marketing literature.

Since bridging takes place at the data link layer of the OSI model, a bridge processes the information from each frame of data it receives. In an Ethernet frame, this provides the MAC address of the frame's source and destination. Bridges use two methods to resolve the network segment that a MAC address belongs to.

  • Transparent bridging – This method uses a forwarding database to send frames across network segments. The forwarding database is initially empty, and entries are built as the bridge receives frames. If an address entry is not found in the forwarding database, the frame is rebroadcast to all ports of the bridge except the one on which it arrived. By means of these broadcast frames, the destination network will respond and a route will be created. Along with recording the network segment to which a particular frame is to be sent, bridges may also record a bandwidth metric to avoid looping when multiple paths are available. Devices that have this transparent bridging functionality are also known as adaptive bridges.
  • Source route bridging – With source route bridging, two frame types are used in order to find the route to the destination network segment. Single-Route (SR) frames comprise most of the network traffic and have set destinations, while All-Route (AR) frames are used to find routes. Bridges send AR frames by broadcasting on all network branches; each step of the followed route is registered by the bridge performing it. Each frame has a maximum hop count, which is set greater than the diameter of the network graph and is decremented by each bridge. Frames are dropped when this hop count reaches zero, to avoid indefinite looping of AR frames. The first AR frame which reaches its destination is considered to have followed the best route, and that route can be used for subsequent SR frames; the other AR frames are discarded. This method of locating a destination network can allow for indirect load balancing among multiple bridges connecting two networks: the more loaded a bridge is, the less likely it is to take part in the route-finding process for a new destination, as it will be slow to forward packets, and a new AR packet will find a different route over a less busy path if one exists. This method is very different from transparent bridge usage, where redundant bridges are inactivated; however, more overhead is introduced to find routes, and space is wasted to store them in frames. A switch with a faster backplane can be just as good for performance, if not for fault tolerance.

Advantages of network bridges

  • Self configuring
  • Primitive bridges are often inexpensive
  • Reduce size of collision domain by microsegmentation in non switched networks
  • Transparent to protocols above the MAC layer
  • Allows the introduction of management - performance information and access control
  • LANs interconnected are separate and physical constraints such as number of stations, repeaters and segment length don't apply


Disadvantages of network bridges

  • Does not limit the scope of broadcasts
  • Does not scale to extremely large networks
  • Buffering introduces store-and-forward delays - on average, the traffic a bridge must handle grows with the number of stations on the rest of the LAN
  • Bridging of different MAC protocols introduces errors
  • Because bridges do more than repeaters by viewing MAC addresses, the extra processing makes them slower than repeaters
  • Bridges are more expensive than repeaters

Bridging versus routing

Bridging and routing are both ways of directing data, but they work through different methods. Bridging takes place at OSI model layer 2 (the data-link layer), while routing takes place at OSI model layer 3 (the network layer). This difference means that a bridge directs frames according to hardware-assigned MAC addresses, while a router makes its decisions according to arbitrarily assigned IP addresses. As a result, bridges are unaware of, and unable to distinguish between, networks, while routers can distinguish them.

When designing a network, you can choose to put multiple segments into one bridged network or to divide it into different networks interconnected by routers. If a host is physically moved from one network area to another in a routed network, it has to get a new IP address; if this system is moved within a bridged network, it doesn't have to reconfigure anything.

Specific uses of the term "bridge"

Documentation on Linux bridging can be found in the Linux networking wiki. Linux bridging allows filtering and routing.

Certain versions of Windows (including XP and Vista) allow for creating a Network Bridge - a network component that aggregates two or more Network Connections and establishes a bridging environment between them. Windows does not support creating more than one network bridge per system.

Filtering Database

To translate between two segment types, a bridge reads a frame's destination MAC address and decides to either forward or filter. If the bridge determines that the destination node is on another segment of the network, it forwards (retransmits) the frame to that segment. If the destination address belongs to the same segment as the source address, the bridge filters (discards) the frame. As nodes transmit data through the bridge, the bridge establishes a filtering database (also known as a forwarding table) of known MAC addresses and their locations on the network. The bridge uses its filtering database to determine whether a frame should be forwarded or filtered.
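The forward-or-filter decision described above reduces to a small lookup. A sketch (hypothetical helper and database names; a real bridge applies this per port to frame headers):

```python
def forward_or_filter(filtering_db, dst_mac, in_port):
    """Sketch of a bridge's decision against its filtering database
    (a dict mapping MAC address -> port where that MAC was last seen)."""
    out_port = filtering_db.get(dst_mac)
    if out_port is None:
        return "flood"           # unknown destination: send to all other ports
    if out_port == in_port:
        return "filter"          # destination is on the source's own segment
    return f"forward:{out_port}" # known destination on another segment

db = {"mac-A": 1, "mac-B": 2}
print(forward_or_filter(db, "mac-B", 1))  # forward:2
print(forward_or_filter(db, "mac-A", 1))  # filter
print(forward_or_filter(db, "mac-X", 1))  # flood
```

The "filter" branch is what distinguishes a bridge from a repeater: traffic local to one segment never crosses to the other.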


History of the Internet

From Wikipedia, the free encyclopedia


Prior to the widespread inter-networking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network, and the prevalent computer networking method was based on the central mainframe method. In the 1960s, computer researchers Levi C. Finch and Robert W. Taylor pioneered calls for a joined-up global network to address interoperability problems. Concurrently, several research programs began to study the principles of networking between separate physical networks, and this led to the development of packet switching. These included the work of Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock's research programs at MIT and UCLA.

This led to the development of several packet-switched networking solutions in the late 1960s and 1970s, including ARPANET and X.25. Additionally, public-access and hobbyist networking systems grew in popularity, including UUCP. They were, however, still disjointed separate networks, served only by limited gateways between them. This led to the application of packet switching to develop a protocol for inter-networking, in which multiple different networks could be joined together into a super-framework of networks. By defining a simple common network system, the Internet protocol suite, the concept of the network could be separated from its physical implementation. These inter-networks began to coalesce into the idea of a single global inter-network that would be called 'The Internet', which spread quickly as existing networks were converted to be compatible with it. It spread first across the advanced telecommunication networks of the western world, and then began to penetrate the rest of the world as it became the de facto international standard and global network. However, the disparity of growth led to a digital divide that is still a concern today.

Following commercialisation and the introduction of privately run Internet Service Providers in the 1980s, and the Internet's expansion into popular use in the 1990s, it has had a drastic impact on culture and commerce. This includes the rise of near-instant communication by e-mail, text-based discussion forums, and the World Wide Web. Investor speculation in the new markets provided by these innovations would also lead to the inflation and collapse of the Dot-com bubble, a major market collapse. Despite this, growth of the Internet continued, and still does.

Before the Internet

In the 1950s and early 1960s, prior to the widespread inter-networking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network. Some networks had gateways or bridges between them, but these bridges were often limited or built specifically for a single use. One prevalent computer networking method was based on the central mainframe method, simply allowing its terminals to be connected via long leased lines. This method was used in the 1950s by Project RAND to support researchers such as Herbert Simon, in Pittsburgh, Pennsylvania, when collaborating across the continent with researchers in Sullivan, Illinois, on automated theorem proving and artificial intelligence.

Three terminals and an ARPA

Main articles: RAND and ARPANET

A fundamental pioneer in the call for a global network, J.C.R. Licklider, articulated the ideas in his January 1960 paper, Man-Computer Symbiosis.

"A network of such [computers], connected to one another by wide-band communication lines [which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and [other] symbiotic functions."

J.C.R. Licklider, [1]

In October 1962, Licklider was appointed head of the United States Department of Defense's Advanced Research Projects Agency, now known as DARPA, within the information processing office. There he formed an informal group within DARPA to further computer research. As part of the information processing office's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley, and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). The problems this arrangement caused made Licklider's identified need for inter-networking evident.

"For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. [...] I said, it's obvious what to do (But I don't want to do it): If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet."

Robert W. Taylor, co-writer with Licklider of "The Computer as a Communications Device", in an interview with the New York Times, [2]

Packet switching

Main article: Packet switching

At the heart of the inter-networking problem lay the issue of connecting separate physical networks to form one logical network, with much wasted capacity inside the assorted separate networks. During the 1960s, Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock (MIT) developed and implemented packet switching. The notion that the Internet was developed to survive a nuclear attack has its roots in the early theories developed by RAND, but is an urban legend, not supported by any Internet Engineering Task Force or other document. Early networks used for the command and control of nuclear forces were message switched, not packet switched, although current strategic military networks are, indeed, packet switched and connectionless. Baran's research had approached packet switching from studies of decentralisation to avoid combat damage compromising the entire network.[3]

Networks that led to the Internet

ARPANET

Main article: ARPANET
Len Kleinrock and the first IMP.[4]

Promoted to the head of the information processing office at DARPA, Robert Taylor intended to realize Licklider's ideas of an interconnected networking system. Bringing in Larry Roberts from MIT, he initiated a project to build such a network. The first ARPANET link was established between the University of California, Los Angeles and the Stanford Research Institute at 22:30 hours on October 29, 1969. By 5 December 1969, a four-node network was connected by adding the University of Utah and the University of California, Santa Barbara. Building on ideas developed in ALOHAnet, the ARPANET grew rapidly. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days.[5][6]

ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet Protocols and Systems. RFC 1, entitled "Host Software", was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.

International collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in 1972, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and University College London.

X.25 and public access

Main articles: X.25, Bulletin board system, and FidoNet

Following on from ARPA's research, packet switching network standards were developed by the International Telecommunication Union (ITU) in the form of X.25 and related standards. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976. This standard was based on the concept of virtual circuits.

The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.[7]

Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, but this was oriented to enterprise use rather than the general email of ARPANET.

The first dial-in public networks used asynchronous TTY terminal protocols to reach a concentrator operated by the public network. Some public networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. There were also the America Online (AOL) and Prodigy dial-in networks and many bulletin board system (BBS) networks such as FidoNet. FidoNet in particular was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.

UUCP

Main articles: UUCP and Usenet

In 1979, two students at Duke University, Tom Truscott and Jim Ellis, came up with the idea of using simple Bourne shell scripts to transfer news and messages on a serial line with the nearby University of North Carolina at Chapel Hill. Following public release of the software, the mesh of UUCP hosts forwarding the Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, and the ability to use existing leased lines, X.25 links or even ARPANET connections. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.

Merging the networks and creating the Internet

TCP/IP

Map of the TCP/IP test network in January 1982

With so many different network methods, something was needed to unify them. Robert E. Kahn of DARPA and ARPANET recruited Vinton Cerf of Stanford University to work with him on the problem. By 1973, they had worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman, Gerard LeLann and Louis Pouzin (designer of the CYCLADES network) with important work on this design.[8]

At this time, the earliest known use of the term Internet was by Vinton Cerf, who wrote:

Specification of Internet Transmission Control Program, Request for Comments No. 675 (Network Working Group, December 1974).[9]

With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. DARPA agreed to fund development of prototype software, and after several years of work, the first somewhat crude demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted. On November 22, 1977[10] a three network demonstration was conducted including the ARPANET, the Packet Radio Network and the Atlantic Packet Satellite network—all sponsored by DARPA. Stemming from the first specifications of TCP in 1974, TCP/IP emerged in mid-late 1978 in nearly final form. By 1981, the associated standards were published as RFCs 791, 792 and 793 and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On 1 January 1983, TCP/IP protocols became the only approved protocol on the ARPANET, replacing the earlier NCP protocol.[11]

ARPANET to Several Federal Wide Area Networks: MILNET, NSI, and NSFNet

Main articles: ARPANET and NSFNet

After the ARPANET had been up and running for several years, ARPA looked for another agency to hand the network off to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. Eventually, in July 1975, the network was turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.

The networks based around the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were.

Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE) became heavily involved in Internet research and started development of a successor to ARPANET. In the mid 1980s all three of these branches developed the first Wide Area Networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet.

More explicitly, NASA developed a TCP/IP based Wide Area Network, NASA Science Network (NSN), in the mid 1980s connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a total integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.

In 1984 NSF developed CSNET exclusively based on TCP/IP. CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. This grew into the NSFNet backbone, established in 1986, and intended to connect and provide access to a number of supercomputing centers established by the NSF.[12]

Transition toward an Internet

The term "Internet" was adopted in the first RFC published on the TCP protocol (RFC 675[13]: Internet Transmission Control Program, December 1974). It was around the time when ARPANET was interlinked with NSFNet, that the term Internet came into more general use,[14] with "an internet" meaning any network using TCP/IP. "The Internet" came to mean a global and large network using TCP/IP. Previously "internet" and "internetwork" had been used interchangeably, and "internet protocol" had been used to refer to other networking systems such as Xerox Network Services.[15]

As interest in wide spread networking grew and new applications for it arrived, the Internet's technologies spread throughout the rest of the world. TCP/IP's network-agnostic approach meant that it was easy to use any existing network infrastructure, such as the IPSS X.25 network, to carry Internet traffic. In 1984, University College London replaced its transatlantic satellite links with TCP/IP over IPSS.

Many sites unable to link directly to the Internet started to create simple gateways to allow transfer of e-mail, at that time the most important application. Sites which only had intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple e-mail peering, such as allowing access to FTP sites via UUCP or e-mail.

TCP/IP becomes worldwide

The first ARPANET connection outside the US was established to NORSAR in Norway in 1973, just ahead of the connection to Great Britain. These links were all converted to TCP/IP in 1982, at the same time as the rest of the ARPANET.

CERN, the European internet, the link to the Pacific and beyond

Between 1984 and 1988 CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs, and an accelerator control system. CERN continued to operate a limited self-developed system, CERNET, internally, and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989.

In 1988 Daniel Karrenberg, from CWI in Amsterdam, visited Ben Segal, CERN's TCP/IP Coordinator, looking for advice about the transition of the European side of the UUCP Usenet network (much of which ran over X.25 links) over to TCP/IP. In 1987, Ben Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks, and in 1989 CERN opened its first external TCP/IP connections.[16] This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out co-ordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.

At the same time as the rise of internetworking in Europe, ad hoc networks formed between Australian universities and to ARPA, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks due to the cost of individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP-based network for Australia.

The Internet began to penetrate Asia in the late 1980s. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNet in 1989. It hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.[17]

Digital divide

Main article: Digital divide

While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from it. On an essentially continental basis, these countries have been building organizations for Internet resource administration and sharing operational experience as more and more transmission facilities go into place.

Africa

At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications. In 1996 a USAID funded project, the Leland initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Côte d'Ivoire and Benin in 1998.

Africa is building an Internet infrastructure. AfriNIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.[18]

There is a wide range of programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between the New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.[19]

Asia and Oceania

The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the region. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).[20]

In 1991, the People's Republic of China saw its first TCP/IP college network, Tsinghua University's TUNET. The PRC went on to make its first global Internet connection in 1995, between the Beijing Electro-Spectrometer Collaboration and the Stanford Linear Accelerator Center. However, China went on to implement its own digital divide by deploying a country-wide content filter.[21]

Latin America

As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.

Opening the network to commerce

The interest in commercial use of the Internet became a hotly debated topic. Although commercial use was forbidden, the exact definition of commercial use could be unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNet connections. Some UUCP links still remained connecting to these networks, however, as administrators turned a blind eye to their operation.

During the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to serve the regional research networks and provide alternate network access, UUCP-based email, and Usenet News to the public. The first dial-up ISP on the West Coast was Best Internet[22] (now Verio), opened in 1986. The first dial-up ISP in the East was world.std.com, opened in 1989.

This caused controversy amongst university users, who were outraged at the idea of noneducational use of their networks. Eventually, it was the commercial Internet service providers who brought prices low enough that junior colleges and other schools could afford to participate in the new arenas of education and research.

By 1990, ARPANET had been overtaken and replaced by newer networking technologies and the project came to a close. In 1994, the NSFNet, now renamed ANSNET (Advanced Networks and Services) and allowing non-profit corporations access, lost its standing as the backbone of the Internet. Both government institutions and competing commercial providers created their own backbones and interconnections. Regional network access points (NAPs) became the primary interconnections between the many networks and the final commercial restrictions ended.

IETF and a standard for standards

Main article: IETF

The Internet has developed a significant subculture dedicated to the idea that the Internet is not owned or controlled by any one person, company, group, or organization. Nevertheless, some standardization and control is necessary for the system to function.

The liberal Request for Comments (RFC) publication procedure engendered confusion about the Internet standardization process, and led to more formalization of officially accepted standards. The IETF started in January 1985 as a quarterly meeting of U.S. government-funded researchers. Representatives from non-government vendors were invited starting with the fourth IETF meeting in October of that year.

Acceptance of an RFC by the RFC Editor for publication does not automatically make the RFC into a standard. It may be recognized as such by the IETF only after experimentation, use, and acceptance have proved it to be worthy of that designation. Official standards are numbered with a prefix "STD" and a number, similar to the RFC naming style. However, even after becoming a standard, most are still commonly referred to by their RFC number.

In 1992, the Internet Society, a professional membership society, was formed and the IETF was transferred to operation under it as an independent international standards body.

NIC, InterNIC, IANA and ICANN

The first central authority to coordinate the operation of the network was the Network Information Centre (NIC) at Stanford Research Institute (SRI) in Menlo Park, California. In 1972, management of these issues was given to the newly created Internet Assigned Numbers Authority (IANA). In addition to his role as the RFC Editor, Jon Postel worked as the manager of IANA until his death in 1998.

As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by Paul Mockapetris. The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract.[23] In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.[24][25]

Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations. Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.[26]

In 1998 both IANA and InterNIC were reorganized under the control of ICANN, a California non-profit corporation contracted by the US Department of Commerce to manage a number of Internet-related tasks. The role of operating the DNS system was privatized and opened up to competition, while the central management of name allocations would be awarded on a contract tender basis.

Use and culture

E-mail and Usenet

Main articles: e-mail and Usenet

E-mail is often called the killer application of the Internet. However, it actually predates the Internet and was a crucial tool in creating it. E-mail started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is unclear, among the first systems to have such a facility were SDC's Q32 and MIT's CTSS.[27]

The ARPANET computer network made a large contribution to the evolution of e-mail. There is one report[28] indicating experimental inter-system e-mail transfers on it shortly after ARPANET's creation. In 1971 Ray Tomlinson created what was to become the standard Internet e-mail address format, using the @ sign to separate user names from host names.[29]

A number of protocols were developed to deliver e-mail among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET e-mail system. E-mail could be passed this way between a number of networks, including ARPANET, BITNET and NSFNet, as well as to hosts connected directly to other sites via UUCP.

In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNet similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).

From gopher to the WWW

As the Internet grew through the 1980s and early 1990s, many people realized the increasing need to be able to find and organize files and information. Projects such as Gopher, WAIS, and the FTP Archive list attempted to create ways to organize distributed data. Unfortunately, these projects fell short in being able to accommodate all the existing data types and in being able to grow without bottlenecks.[citation needed]

One of the most promising user interface paradigms during this period was hypertext. The technology had been inspired by Vannevar Bush's "Memex"[30] and developed through Ted Nelson's research on Project Xanadu and Douglas Engelbart's research on NLS.[31] Many small self-contained hypertext systems had been created before, such as Apple Computer's HyperCard. Gopher became the first commonly used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way.

In 1989, whilst working at CERN, Tim Berners-Lee invented a network-based implementation of the hypertext concept. By releasing his invention to public use, he ensured the technology would become widespread.[32] One early popular web browser, modeled after HyperCard, was ViolaWWW.

Scholars generally agree,[citation needed] however, that the turning point for the World Wide Web began with the introduction[33] of the Mosaic web browser[34] in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by then-Senator Al Gore's High Performance Computing and Communication Act of 1991, also known as the Gore Bill.[35] Indeed, Mosaic's graphical interface soon became more popular than Gopher, which at the time was primarily text-based, and the WWW became the preferred interface for accessing the Internet. (Gore's reference to his role in "creating the Internet", however, was ridiculed in his presidential election campaign. See the full article Al Gore and information technology.)

Mosaic was eventually superseded in 1994 by Andreessen's Netscape Navigator, which replaced Mosaic as the world's most popular browser. While it held this title for some time, eventually competition from Internet Explorer and a variety of other browsers almost completely displaced it. Another important event held on January 11, 1994, was The Superhighway Summit at UCLA's Royce Hall. This was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications."

24 Hours in Cyberspace, the "largest one-day online event" up to that date (February 8, 1996), took place on the then-active website cyber24.com.[37][38] It was headed by photographer Rick Smolan.[39] A photographic exhibition was unveiled at the Smithsonian Institution's National Museum of American History on 23 January 1997, featuring 70 photos from the project.

Search engines

Even before the World Wide Web, there were search engines that attempted to organize the Internet. The first of these was the Archie search engine from McGill University in 1990, followed in 1991 by WAIS and Gopher. All three of those systems predated the invention of the World Wide Web but all continued to index the Web and the rest of the Internet for several years after the Web appeared. There are still Gopher servers as of 2006, although there are a great many more web servers.

As the Web grew, search engines and Web directories were created to track pages on the Web and allow people to find things. The first full-text Web search engine was WebCrawler in 1994. Before WebCrawler, only Web page titles were searched. Another early search engine, Lycos, created in 1993 as a university project, was the first to achieve commercial success. During the late 1990s, both Web directories and Web search engines were popular—Yahoo! (founded 1995) and AltaVista (founded 1995) were the respective industry leaders.

By August 2001, the directory model had begun to give way to search engines, tracking the rise of Google (founded 1998), which had developed new approaches to relevancy ranking. Directory features, while still commonly available, became after-thoughts to search engines.

Database size, which had been a significant marketing feature through the early 2000s, was similarly displaced by emphasis on relevancy ranking, the methods by which search engines attempt to sort the best results first. Relevancy ranking first became a major issue circa 1996, when it became apparent that it was impractical to review full lists of results. Consequently, algorithms for relevancy ranking have continuously improved. Google's PageRank method for ordering the results has received the most press, but all major search engines continually refine their ranking methodologies with a view toward improving the ordering of results. As of 2006, search engine rankings are more important than ever, so much so that an industry has developed ("search engine optimizers", or "SEO") to help web-developers improve their search ranking, and an entire body of case law has developed around matters that affect search engine rankings, such as use of trademarks in metatags. The sale of search rankings by some search engines has also created controversy among librarians and consumer advocates.
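The core idea behind PageRank mentioned above—that a page's importance derives from the importance of the pages linking to it—can be illustrated with a short power-iteration sketch. This is a simplified teaching example with hypothetical function names, not Google's production algorithm, which incorporates many additional signals.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Compute simplified PageRank scores.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict mapping each page to its rank (ranks sum to 1).
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform ranks

    for _ in range(iterations):
        # Every page keeps a small baseline rank (the "random surfer"
        # jumping to an arbitrary page with probability 1 - damping).
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # A page shares its damped rank equally among its outlinks.
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page with no outlinks: spread its rank evenly.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

On a tiny graph such as `{"a": ["b", "c"], "b": ["c"], "c": ["a"]}`, the page with the most (and best-ranked) inbound links ends up with the highest score, which is the intuition the passage describes: relevance inferred from link structure rather than page text alone.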

Dot-com bubble

Main article: Dot-com bubble

The suddenly low price of reaching millions worldwide, and the possibility of selling to or hearing from those people at the same moment they were reached, promised to overturn established business dogma in advertising, mail-order sales, customer relationship management, and many more areas. The web was a new killer app—it could bring together unrelated buyers and sellers in seamless and low-cost ways. Visionaries around the world developed new business models and ran to their nearest venture capitalist. Of course, some of the new entrepreneurs were truly talented at business administration, sales, and growth; but the majority were just people with ideas, and didn't manage the capital influx prudently. Additionally, many dot-com business plans were predicated on the assumption that by using the Internet, they would bypass the distribution channels of existing businesses and therefore not have to compete with them; when established businesses with strong existing brands developed their own Internet presence, these hopes were shattered, and the newcomers were left attempting to break into markets dominated by larger, more established businesses. Many did not have the ability to do so.

The dot-com bubble burst on March 10, 2000, when the technology-heavy NASDAQ Composite index peaked at 5048.62 (intra-day peak 5132.52), more than double its value just a year before. By 2001, the bubble's deflation was running at full speed. A majority of the dot-coms had ceased trading, after having burnt through their venture capital and IPO capital, often without ever making a profit.

Worldwide Online Population Forecast

In its "Worldwide Online Population Forecast, 2006 to 2011," JupiterResearch anticipates that a 38 percent increase in the number of people with online access will mean that, by 2011, 22 percent of the Earth's population will surf the Internet regularly.

JupiterResearch says the worldwide online population will increase at a compound annual growth rate of 6.6 percent during the next five years, far outpacing the 1.1 percent compound annual growth rate for the planet's population as a whole. The report says 1.1 billion people currently enjoy regular access to the Web.

North America will remain on top in terms of the number of people with online access. According to JupiterResearch, online penetration rates on the continent will increase from the current 70 percent of the overall North American population to 76 percent by 2011. However, Internet adoption has "matured," and its adoption pace has slowed, in more developed countries including the United States, Canada, Japan and much of Western Europe, notes the report.

As the online population of the United States and Canada grows by only about 3 percent, explosive adoption rates will take place in China and India, says JupiterResearch. The report says China should reach an online penetration rate of 17 percent by 2011 and India should hit 7 percent during the same time frame. This growth is directly related to infrastructure development and increased consumer purchasing power, notes JupiterResearch.

By 2011, Asians will make up about 42 percent of the world's population with regular Internet access, about 5 percentage points more than today, says the study.

Penetration levels similar to North America's are found in Scandinavia and bigger Western European nations such as the United Kingdom and Germany, but JupiterResearch says that a number of Central European countries "are relative Internet laggards."

Brazil, "with its soaring economy," is predicted by JupiterResearch to experience a 9 percent compound annual growth rate, the fastest in Latin America, but China and India are likely to do the most to boost the world's online penetration in the near future.

For the study, JupiterResearch defined "online users" as people who regularly access the Internet by "dedicated Internet access" devices. Those devices do not include cell phones.

Historiography

Some concerns have been raised over the historiography of the Internet's development. This is due to the lack of centralised documentation for many of the early developments that led to the Internet.

"The Arpanet period is somewhat well documented because the corporation in charge - BBN - left a physical record. Moving into the NSFNET era, it became an extraordinarily decentralised process. The record exists in people's basements, in closets. [...] So much of what happened was done verbally and on the basis of individual trust."

Doug Gale