

Wednesday, 3 May 2017

Li-Fi Technology

What is Li-Fi?
How does Li-Fi work?
Wi-Fi vs Li-Fi | The ultimate definition of Li-Fi | Li-Fi news


Li-Fi claims to be 100 times faster than standard Wi-Fi. But what exactly is it and how does it work?



What is Li-Fi?

Light Fidelity, or Li-Fi, is a Visible Light Communications (VLC) system that carries wireless data at very high speeds.
Li-Fi uses common household LED (light emitting diode) light bulbs to enable data transfer, boasting speeds of up to 224 gigabits per second.
The term Li-Fi was coined by University of Edinburgh Professor Harald Haas during a TED Talk in 2011. Haas envisioned light bulbs that could act as wireless routers.
Subsequently, in 2012, after four years of research, Haas set up the company pureLiFi with the aim 'to be the world leader in Visible Light Communications technology'.

How it works

Li-Fi and Wi-Fi are quite similar, as both transmit data electromagnetically. However, Wi-Fi uses radio waves while Li-Fi runs on visible light.
As we now know, Li-Fi is a Visible Light Communications (VLC) system. This means it uses a photo-detector to receive light signals and a signal processing element to convert the data into 'stream-able' content.
An LED light bulb is a semiconductor light source, which means the constant current supplied to it can be dipped and raised at extremely high speeds without the flicker being visible to the human eye.
Data is fed into an LED light bulb (fitted with signal processing technology), which then sends that data, embedded in its beam, at rapid speeds to the photo-detector (photodiode).
The tiny changes in the rapid dimming of the LED bulb are then converted by the receiver into an electrical signal.
The signal is then converted back into a binary data stream that we would recognise as the web, video and audio applications that run on internet-enabled devices.
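To make that encode/decode step more concrete, here is a minimal Python sketch of the general idea using simple on-off keying: bits are mapped to brightness levels on the transmit side and thresholded back into bits on the receive side. This is purely illustrative and is not pureLiFi's actual modulation scheme; commercial VLC systems typically use far more sophisticated techniques.

```python
# Toy illustration of the idea behind VLC/Li-Fi: data bits are mapped to rapid
# changes in LED brightness (on-off keying here, purely for illustration).

def encode_to_light(data: bytes, high=1.0, low=0.2) -> list[float]:
    """Turn each bit into a brightness level the LED driver would emit."""
    levels = []
    for byte in data:
        for i in range(7, -1, -1):          # most significant bit first
            bit = (byte >> i) & 1
            levels.append(high if bit else low)
    return levels

def decode_from_light(levels: list[float], threshold=0.6) -> bytes:
    """What the photodiode/receiver does: threshold brightness back into bits."""
    bits = [1 if level > threshold else 0 for level in levels]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

message = b"Li-Fi"
assert decode_from_light(encode_to_light(message)) == message
```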

Li-Fi vs Wi-Fi

While some may think that Li-Fi, with its 224 gigabits per second, leaves Wi-Fi in the dust, Li-Fi's exclusive use of visible light could hold back mass uptake.
Li-Fi signals cannot pass through walls, so in order to enjoy full connectivity, capable LED bulbs would need to be placed throughout the home. Li-Fi also requires the light bulb to be on at all times to provide connectivity, meaning the lights would need to stay on during the day.
What's more, where there is a lack of light bulbs, there is a lack of Li-Fi internet, so Li-Fi does take a hit when compared with public Wi-Fi networks.
In a recent announcement, an extension of standard Wi-Fi called Wi-Fi HaLow was unveiled.
This new project claims to double the range of connectivity while using less power. Because of this, Wi-Fi HaLow is reportedly well suited to battery-powered devices such as smartwatches and smartphones, and lends itself to Internet of Things devices such as sensors and smart applications.
But it's not all doom and gloom! Due to its impressive speeds, Li-Fi could make a huge impact on the Internet of Things too, with data transferred at much higher rates and even more devices able to connect to one another.
What's more, due to its shorter range, Li-Fi is more secure than Wi-Fi, and it's reported that light beams reflected off a surface could still achieve speeds of 70 megabits per second.


The future of Li-Fi

In November 2014, Li-Fi pioneers pureLiFi joined forces with French lighting company Lucibel with the aim of bringing out Li-Fi-enabled products by the end of 2015.
pureLiFi already have two products on the market: the Li-Flame Ceiling Unit, which connects to an LED light fixture, and the Li-Flame Desk Unit, which connects to a device via USB; both aim to provide light and connectivity in one device.
Plus, with faster connectivity and data transmission it’s an interesting space for businesses. The integration of internet of things devices and Li-Fi will provide a wealth of opportunities for retailers and other businesses alike. For example, shop owners could transmit data to multiple customers' phones quickly, securely and remotely. 
Li-Fi is reportedly being tested in Dubai by UAE-based telecommunications provider du and Zero1. Du claims to have successfully provided internet, audio and video streaming over a Li-Fi connection.
What's more, reports suggest that Apple may build future iPhones with Li-Fi capabilities. A Twitter user found references to Li-Fi written as 'LiFiCapability' within the iOS 9.1 code, hinting that Apple may integrate Li-Fi into iPhones in the future.
Whether or not Li-Fi will live up to its hype is yet to be decided.

What’s the difference between a hub, a switch, and a router?

"Hubs, switches, and routers are all computer networking devices with varying capabilities. Unfortunately, the terms are also often misused"
Hubs, switches, and routers are all devices that let you connect one or more computers to other computers, networked devices, or even other networks. Each has two or more connectors called ports into which you plug in the cables to make the connection. Varying degrees of magic happen inside the device and therein lies the difference. I often see the terms misused, so let’s clarify what each one really means. 

Hubs

A hub is typically the least expensive, least intelligent, and least complicated of the three. Its job is very simple – anything that comes in one port is sent out to the others.
That’s it.
If a message comes in for computer “A”, that message is sent out all the other ports, regardless of which one computer “A” is on:
[Image: Message coming into a hub]
And when computer “A” responds, its response also goes out to every other port on the hub:
[Image: Response being sent through a hub]
Every computer connected to the hub “sees” everything that every other computer on the hub sees. The computers themselves decide whether they are the intended recipient of a message and whether to pay attention to it.
The hub itself is blissfully ignorant of the data being transmitted. For years, simple hubs have been quick and easy ways to connect computers in small networks.
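To show just how simple that behaviour is, here is a tiny Python sketch (a toy model, not any real device's firmware) in which a hub repeats whatever it receives out of every other port:

```python
# Minimal simulation of a hub: whatever arrives on one port is repeated out of
# every other port, with no inspection of the frame at all.

class Hub:
    def __init__(self, ports: int):
        self.ports = ports

    def receive(self, in_port: int, frame: str) -> list[tuple[int, str]]:
        """Return (port, frame) pairs for every port except the one it came in on."""
        return [(p, frame) for p in range(self.ports) if p != in_port]

hub = Hub(ports=4)
print(hub.receive(in_port=0, frame="to:A hello"))
# [(1, 'to:A hello'), (2, 'to:A hello'), (3, 'to:A hello')]
```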

Switches

A switch does essentially what a hub does, but more efficiently. By paying attention to the traffic that comes across it, it can “learn” where particular addresses are.
Initially, a switch knows nothing and simply sends incoming messages on to all ports:
[Image: The initial contact through a switch]
Even in accepting that first message, however, the switch has learned something – it knows which connection the sender of the message is on. Thus, when machine “A” responds to the message, the switch only needs to send that response out to the one connection:
[Image: Response being processed through a switch]

In addition to sending the response through to the originator, the switch has now learned something else – it now knows on which connection machine “A” is located.
That means that subsequent messages destined for machine “A” need only be sent to that one port:
[Image: Switch sending an incoming message to the machine whose location it has learned]
Switches learn the location of the devices that they are connected to almost instantaneously. The net result is that most network traffic only goes where it needs to rather than to every port. On busy networks, this can make the network significantly faster.
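A rough Python sketch of that learning behaviour might look like the following (a toy model with made-up addresses, not real switch firmware): the switch records which port each source address arrived on, floods frames for unknown destinations, and sends directly once a destination has been learned.

```python
# Sketch of a learning switch: it records which port each source address was
# seen on, and floods only when the destination is still unknown.

class Switch:
    def __init__(self, ports: int):
        self.ports = ports
        self.mac_table: dict[str, int] = {}   # address -> port

    def receive(self, in_port: int, src: str, dst: str) -> list[int]:
        self.mac_table[src] = in_port          # learn where the sender lives
        if dst in self.mac_table:
            return [self.mac_table[dst]]       # forward to the one known port
        return [p for p in range(self.ports) if p != in_port]  # flood

sw = Switch(ports=4)
print(sw.receive(in_port=0, src="B", dst="A"))  # A unknown -> flood [1, 2, 3]
print(sw.receive(in_port=2, src="A", dst="B"))  # B learned  -> [0]
print(sw.receive(in_port=0, src="B", dst="A"))  # A learned  -> [2]
```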


Routers

A router is the smartest and most complicated of the bunch. Routers come in all shapes and sizes – from the small four-port broadband routers that are very popular right now to the large, industrial-strength devices that drive the internet itself.
A simple way to think of a router is as a computer that can be programmed to understand, possibly manipulate, and route the data that it’s being asked to handle. Many routers today are, in fact, little computers dedicated to the task of routing network traffic.
As far as simple traffic routing is concerned, a router operates exactly as a switch, learning the location of the computers on its connections and routing traffic only to those computers.
Consumer-grade routers perform at minimum two additional and important tasks: DHCP and NAT.


DHCP – Dynamic Host Configuration Protocol – is the way dynamic IP addresses are assigned. A device asks for an IP address to be assigned to it from “upstream” and a DHCP server responds with an IP address assignment. A router connected to your ISP-provided internet connection will typically ask your ISP’s server for an IP address; this will be your IP address on the internet. Your local computers, on the other hand, will ask the router for an IP address and these addresses are local to your network.
[Image: Router receiving an IP address from the ISP and handing out IP addresses to local computers]
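As a very simplified sketch of the idea (a toy model only; real DHCP involves leases, renewals and broadcast discovery, and the addresses below are just examples), a router's DHCP server essentially hands out the next free address from a local pool and remembers who has it:

```python
# Very simplified picture of what a home router's DHCP server does: hand out
# the next free address from a local pool and remember which device has it.

class TinyDhcpServer:
    def __init__(self, network="192.168.1.", first=2, last=254):
        self.network, self.next_host, self.last = network, first, last
        self.leases: dict[str, str] = {}       # device id (e.g. MAC) -> IP

    def request(self, device_id: str) -> str:
        if device_id not in self.leases:
            if self.next_host > self.last:
                raise RuntimeError("address pool exhausted")
            self.leases[device_id] = f"{self.network}{self.next_host}"
            self.next_host += 1
        return self.leases[device_id]

dhcp = TinyDhcpServer()
print(dhcp.request("aa:bb:cc:dd:ee:01"))   # 192.168.1.2
print(dhcp.request("aa:bb:cc:dd:ee:02"))   # 192.168.1.3
```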
NAT – Network Address Translation – is the way that the router translates the IP addresses of packets that cross the internet/local network boundary. When computer “A” sends a packet out, the IP address that it’s “from” is that of computer “A” – 192.168.1.2 in the example above. When the router passes that on to the internet, it replaces the local IP address with the internet IP address assigned by the ISP. It also keeps track, so that if a response comes back from somewhere on the internet, the router knows to do the translation in reverse – replace the internet IP address with the local IP address for machine “A” and then send that response packet on to machine “A”.
A side effect of NAT is that machines on the internet cannot initiate communications to local machines – they can only respond to communications initiated by those local machines.
The net effect is that the router then also acts as a firewall:
[Image: Router acting as a firewall]
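The bookkeeping behind NAT can be sketched in a few lines of Python. This is a deliberately simplified toy model (real NAT also rewrites port numbers and tracks connection state, and the addresses are invented for illustration), but it shows both the outbound translation and why unsolicited inbound traffic is simply dropped:

```python
# Sketch of the bookkeeping behind NAT: outgoing packets get the router's
# public address plus a translation entry; replies are matched against that
# entry and rewritten back to the local address.

class TinyNat:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.table: dict[tuple[str, int], str] = {}   # (remote IP, port) -> local IP

    def outbound(self, local_ip: str, remote_ip: str, port: int) -> tuple[str, str]:
        self.table[(remote_ip, port)] = local_ip
        # The packet now appears to come from the router's public address.
        return (self.public_ip, remote_ip)

    def inbound(self, remote_ip: str, port: int) -> str | None:
        # Unsolicited traffic has no table entry, so it is simply dropped --
        # the firewall-like side effect described above.
        return self.table.get((remote_ip, port))

nat = TinyNat(public_ip="203.0.113.7")
nat.outbound("192.168.1.2", "93.184.216.34", 443)
print(nat.inbound("93.184.216.34", 443))   # 192.168.1.2
print(nat.inbound("198.51.100.9", 80))     # None -> dropped
```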

A note about speed

A quick note on one other thing that you’ll often see mentioned with these devices, and that’s network speed. Most devices now are capable of both 10 Mbps (10 megabits, or million bits, per second) and 100 Mbps, and will automatically detect the speed.
More and more devices are now capable of handling 1000 Mbps, or a billion bits per second (1 Gbps).
Similarly, many devices are now also wireless transmitters that simply act like additional ports on the device.
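For a feel of what those figures mean in practice, here is a small Python sketch of the arithmetic (a rough lower bound that ignores protocol overhead): link speeds are quoted in bits per second, so dividing a file's size in bits by the link rate gives an approximate transfer time.

```python
# Rough arithmetic behind the speed figures above: link rates are quoted in
# bits per second, so a 1 Gbps port moves at most 1,000,000,000 bits each
# second (about 125 MB/s before any protocol overhead).

def seconds_to_transfer(megabytes: float, link_mbps: float) -> float:
    bits = megabytes * 8 * 1_000_000       # file size in bits (1 MB = 1,000,000 bytes)
    return bits / (link_mbps * 1_000_000)  # link rate in bits per second

for rate in (10, 100, 1000):
    print(f"700 MB over {rate} Mbps: {seconds_to_transfer(700, rate):.0f} s")
# 700 MB over 10 Mbps: 560 s
# 700 MB over 100 Mbps: 56 s
# 700 MB over 1000 Mbps: 6 s
```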

Tuesday, 3 July 2012

Characteristics of a Computer

The basic characteristics of a computer are:

1. Speed: A computer can work very fast. It takes only a few seconds to complete calculations that would take us hours. You may be surprised to know that a computer can perform millions (1,000,000) of instructions, and even more, per second.
Therefore, we measure the speed of a computer in terms of microseconds (10^-6 of a second) or nanoseconds (10^-9 of a second). From this you can imagine how fast your computer performs work.
2. Accuracy: The degree of accuracy of a computer is very high, and every calculation is performed with the same accuracy. The accuracy level is determined by the design of the computer. Errors in a computer are due to human mistakes and inaccurate data.
3. Diligence: A computer is free from tiredness, lack of concentration, fatigue, etc. It can work for hours without making any errors. If millions of calculations are to be performed, a computer will perform every calculation with the same accuracy. Due to this capability, it outperforms human beings in routine types of work.
4. Versatility: This means the capacity to perform completely different types of work. You may use your computer to prepare payroll slips; the next moment you may use it for inventory management or to prepare electricity bills.
5. Power of Remembering: A computer can store any amount of information or data. Any information can be stored and recalled for as long as you require it, for any number of years. It is entirely up to you how much data you want to store in a computer and when to erase or retrieve it.
6. No IQ: A computer is a dumb machine; it cannot do any work without instructions from the user. It performs the instructions at tremendous speed and with accuracy, but it is up to you to decide what you want to do and in what sequence. A computer cannot take its own decisions as you can.
7. No Feeling: It does not have feelings, emotions, taste, knowledge or experience. Thus it does not get tired even after long hours of work, and it does not distinguish between users.
8. Storage: The computer has an in-built memory where it can store a large amount of data. You can also store data on secondary storage devices such as floppy disks, which can be kept outside your computer and carried to other computers.

Generations of Computer



Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient and reliable devices.

The various generations of computers are listed below:


(i)  First Generation (1946-1954): In 1946 there was no 'best' way of storing instructions and data in a computer memory. There were four competing technologies for providing computer memory: electrostatic storage tubes, acoustic delay lines (mercury or nickel), magnetic drums and disks, and magnetic core storage.

Digital computers using electronic valves (vacuum tubes) are known as first-generation computers. The high cost of vacuum tubes prevented their use for main memory; instead, acoustic delay lines were commonly used, storing information in the form of propagating sound waves.

Vacuum tubes consume a lot of power. The vacuum tube was developed by Lee DeForest in 1908. These computers were large in size, and writing programs for them was difficult. Some of the computers of this generation were:


Mark I: The IBM Automatic Sequence Controlled Calculator (ASCC), called the Mark I by Harvard University, was an electro-mechanical computer and is considered a first-generation machine. It was the first machine to successfully perform a long series of arithmetic and logical operations, and the first operating machine that could execute long computations automatically. The Mark I was built as a partnership between Harvard and IBM in 1944 and was the first programmable digital computer made in the U.S. It was not, however, a purely electronic computer; instead it was constructed out of switches, relays, rotating shafts, and clutches. The machine weighed 5 tons, incorporated 500 miles of wire, was 8 feet tall and 51 feet long, and had a 50-foot rotating shaft running its length, turned by a 5-horsepower electric motor.


ENIAC: It was the first general-purpose electronic computer, built in 1946 at the University of Pennsylvania, USA by John Mauchly and J. Presper Eckert. The completed machine was announced to the public on the evening of February 14, 1946. It was named the Electronic Numerical Integrator and Calculator (ENIAC). ENIAC contained 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints. It weighed more than 30 short tons (27 t), was roughly 8 by 3 by 100 feet (2.4 m × 0.9 m × 30 m), took up 1,800 square feet (167 m²), and consumed 150 kW of power. Input was possible from an IBM card reader, and an IBM card punch was used for output. These cards could be used to produce printed output offline using an IBM accounting machine such as the IBM 405. Today your favorite computer is many times more powerful than ENIAC, yet far smaller in size.

EDVAC: It stands for Electronic Discrete Variable Automatic Computer and was developed in 1950. It was intended to be a vast improvement upon ENIAC: it was binary rather than decimal, and it was a stored-program computer. The concept of storing data and instructions inside the computer was introduced here. This allowed much faster operation, since the computer had rapid access to both data and instructions. The other advantage of storing instructions was that the computer could make logical decisions internally.

The EDVAC was a binary serial computer with automatic addition, subtraction, multiplication, programmed division and automatic checking with an ultrasonic serial memory. EDVAC's addition time was 864 microseconds and its multiplication time was 2900 microseconds (2.9 milliseconds).

The computer had almost 6,000 vacuum tubes and 12,000 diodes, and consumed 56 kW of power. It covered 490 ft² (45.5 m²) of floor space and weighed 17,300 lb (7,850 kg).


EDSAC: It stands for Electronic Delay Storage Automatic Computer and was developed by M.V. Wilkes at Cambridge University in 1949. Two groups were working at the same time to develop the first stored-program computer: in the United States, the University of Pennsylvania was working on the EDVAC (Electronic Discrete Variable Automatic Computer), while in England, Cambridge was developing the EDSAC (Electronic Delay Storage Automatic Computer). The EDSAC won the race as the first stored-program computer, beating the United States’ EDVAC by two months. The EDSAC performed computations in the three-millisecond range and carried out arithmetic and logical operations without human intervention. The key to its success was the stored instructions, on which it depended solely for its operation. This machine marked the beginning of the computer age; EDSAC was the first computer to run a stored program.



UNIVAC I: Eckert and Mauchly produced the UNIVAC (Universal Automatic Computer) in 1951. It was the first commercial computer produced in the United States and was designed principally by J. Presper Eckert and John Mauchly, the inventors of the ENIAC.

The machine was 25 feet by 50 feet, contained 5,600 tubes, 18,000 crystal diodes, and 300 relays. It utilized serial circuitry and a 2.25 MHz bit rate, and had an internal storage capacity of 1,000 words, or 12,000 characters.

It utilized a Mercury delay line, magnetic tape, and typewriter output. The UNIVAC was used for general purpose computing with large amounts of input and output.

Power consumption was about 120 kVA. Its reported processing speed was 0.525 milliseconds for arithmetic functions, 2.15 milliseconds for multiplication and 3.9 milliseconds for division.

The UNIVAC was also the first computer to come equipped with a magnetic tape unit and was the first computer to use buffer memory.

Other Important Computers of First Generation

Some other computers of this time worth mentioning are the Whirlwind, developed at the Massachusetts Institute of Technology, and the JOHNNIAC, by the Rand Corporation. The Whirlwind was the first computer to display real-time video and use core memory. The JOHNNIAC was named in honor of John von Neumann. Computers at this time were usually kept in special locations like government and university research labs or military compounds.


Limitations of First Generation Computer

The following are the major drawbacks of first-generation computers.


1.  They used valves or vacuum tubes as their main electronic component.

2. They were large in size, slow in processing and had limited storage capacity.

3.  They consumed lots of electricity and produced lots of heat.

4.  Their computing capabilities were limited.

5. They were not very accurate or reliable.

6.  They used machine level language for programming.

7.  They were very expensive.

Example: ENIAC, UNIVAC, IBM 650, etc.


(ii)   Second Generation (1955-1964): Second-generation computers used transistors for CPU components, ferrite cores for main memory and magnetic disks for secondary memory. They used high-level languages such as FORTRAN (1956), ALGOL (1960) and COBOL (1960-1961). An I/O processor was included to control I/O operations.

Around 1955, a device called the transistor replaced the bulky vacuum tubes of first-generation computers. Transistors are smaller than vacuum tubes and have a higher operating speed. They have no filament and require no heating, and their manufacturing cost was also very low. Thus the size of the computer was reduced considerably.

It was in the second generation that the concepts of the Central Processing Unit (CPU), memory, programming languages and input and output units were developed. Programming languages such as COBOL and FORTRAN were developed during this period. Some of the computers of the second generation were:



1. IBM 1620: Its size was smaller compared to first-generation computers, and it was mostly used for scientific purposes.



2. IBM 1401: Its size was small to medium, and it was used for business applications.


3. CDC 3600: Its size was large, and it was used for scientific purposes.

Features:

1.  Transistors were used instead of vacuum tubes.

2.  Processing speed was faster than first-generation computers (microseconds).

3.  Smaller in Size (51 square feet)

4. The input and output devices were faster.

Example: IBM 1400 and 7000 Series, Control Data 3600 etc.


(iii)   Third Generation (1964-1977): This generation was marked by the development of a small chip with the capacity of around 300 transistors. These ICs are popularly known as chips. A single IC has many transistors, registers and capacitors built on a single thin slice of silicon, so the size of the computer was further reduced. Some of the computers developed during this period were the IBM-360, ICL-1900, IBM-370, and VAX-750. Higher-level languages such as BASIC (Beginners All-purpose Symbolic Instruction Code) were developed during this period. Computers of this generation were small in size and low in cost, had large memory, and their processing speed was very high. Very soon ICs were replaced by LSI (Large Scale Integration), which consisted of about 100 components; an IC containing about 100 components is called LSI.

Features:

1. They used Integrated Circuit (IC) chips in place of the transistors.

2. Semi conductor memory devices were used.

3.  The size was greatly reduced, the speed of processing was high, and they were more accurate and reliable.

4.  Large Scale Integration (LSI) and Very Large Scale Integration (VLSI) were also developed.

5.  The mini computers were introduced in this generation.

6. They used high level language for programming.

Example: IBM 360, IBM 370 etc.


(iv)     Fourth Generation : An IC containing about 100 components is called LSI (Large Scale Integration), and one which has more than 1,000 such components is called VLSI (Very Large Scale Integration). This generation uses large-scale integrated circuits (LSICs) built on a single silicon chip, called microprocessors. Due to the development of the microprocessor, it became possible to place a computer’s entire central processing unit (CPU) on a single chip. These computers are called microcomputers. Later, very large-scale integrated circuits (VLSICs) replaced LSICs. Thus the computer which occupied a very large room in earlier days can now be placed on a table. The personal computer (PC) that you see in your school is a fourth-generation computer. Main memory used fast semiconductor chips of up to 4 Mbits in size, and hard disks were used as secondary memory. Keyboards, dot-matrix printers, etc. were developed. Operating systems such as MS-DOS, UNIX and Apple’s Macintosh were available, and object-oriented languages such as C++ were developed.

Features:

1.  They used Microprocessor (VLSI) as their main switching element.

2. They are also called as micro computers or personal computers.

3.  Their size varies from desktop to laptop or palmtop.

4.  They have a very high speed of processing; they are highly accurate, reliable, diligent and versatile.

5.  They have very large storage capacity.

Example: IBM PC, Apple-Macintosh etc.


(v)    Fifth Generation (1991 onwards): Fifth-generation computers use ULSI (Ultra-Large Scale Integration) chips, in which millions of transistors are placed on a single IC. 64-bit microprocessors have been developed during this period, along with data-flow and EPIC processor architectures. Both RISC and CISC designs are used in modern processors. Memory chips and flash memory up to 1 GB, hard disks up to 600 GB and optical disks up to 50 GB have been developed. Fifth-generation digital computers are also expected to incorporate artificial intelligence.
