Private Cloud: The Great Abstraction Layer

In the early days of computing, people assembled systems by hand and then spent endless days debugging problems and chasing intermittent errors just to produce a stable environment to work with.

Fast-forward 30 years. People still assemble systems by hand, and they still spend endless hours debugging complex problems.

So, does that mean nothing has changed? No. The fundamental human drive to do more persists, but look at what you can do with even the most basic computer in 2013 compared to the 1980s.

In the '80s, your sole interface to the computing power of your system was a keyboard and an arcane command line. The only way you could exchange information with your peers was by manually copying files to a large-format floppy disk and physically exchanging that media.

Thirty years later, our general usage patterns haven't changed that significantly: we use computers to input, store, and process information. What has changed significantly is the sheer number of input and output sources and formats, such as the USB flash drive, portable hard disks, the Internet, email, FTP, and BitTorrent, among others. As a result, users' expectations of what they will be able to do with a computer system have increased massively.

This increase in expectation has only been possible because each innovation in computing has leap-frogged from the previous one, often driven by pioneering vendors (or left-field inventors) rather than by agreed standards. However, as those pioneering technologies established a market share, their innovations became standards (officially or not) and were iterated by future innovators adding functionality and further compatibility. In essence, abstraction is making it easier for the next innovator to get further, faster, and cheaper.

The history of computing (and indeed human development) has been possible because each new generation of technology has stood on the shoulders of its predecessors. In practical terms, this has been possible because of an ongoing abstraction of complexity. This abstraction has also made it feasible to replace or change the underlying processes, configurations, or even hardware without significantly impacting applications that rely on it. This abstraction eventually became known as an application programming interface (API)—an agreed demarcation point between various components of a system.

Here's an example. Ask a typical 2013 graduate Java developer how a hard disk determines which sectors and tracks to read a file from over a Small Computer System Interface (SCSI) bus. You'll probably get a shrug and "I just call java.io.FileReader and it does its thing." That's because, frankly, they don't care. And they don't care because they don't need to. A diligent design and engineering team has provided them with an API call that masks the underlying complexities of talking to a physical disk: reading 1s and 0s from a magnetic track, decoding them into usable data, correcting any random errors, and ensuring that remaining errors are handled gracefully (most of the time). That same application neither knows nor cares whether the file is stored on a SCSI or a SATA (Serial Advanced Technology Attachment) disk, or even over a network connection, because the detail is abstracted away.
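To make that concrete, here is a minimal sketch of everything that developer actually sees of the storage stack (the filename notes.txt is purely illustrative):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ReadFile {
        public static void main(String[] args) throws IOException {
            // The developer's entire view of storage: open, read, close.
            // Whether notes.txt lives on a SCSI disk, a SATA disk, or a
            // network share is invisible at this layer.
            try (BufferedReader reader = new BufferedReader(new FileReader("notes.txt"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }

Everything below that try block, from the OS call down to the voltages on the disk head, is someone else's problem.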

If you map this out, your Java application follows steps similar to these through the stack (a short sketch after the list shows how the top of this stack can stay unchanged while the lower layers are swapped out):

1) The Java developer creates the user code.

2) The developer's code invokes java.io.FileReader to open and read the file.

3) The Java runtime converts the call into the appropriate operating system (OS) API call for the OS the code is running on.

4) The operating system receives the API call.

5) The operating system job scheduler creates a job to accomplish the request.

6) The kernel dispatches the job to the filesystem driver.

7) The filesystem driver creates pointers, determines metadata, and builds a stream of file content data.

8) The disk subsystem driver packages the file data into a sequence of SCSI bus commands, performing the required hardware register manipulations and raising CPU interrupts.

9) The disk firmware responds to the commands and receives the data issued over the bus.

10) The disk firmware calculates the appropriate physical disk platter location.

11) The disk firmware manipulates the voltage to microprocessors, motors, and sensors over command wires to move physical disk heads into a predetermined position.

12) The disk microprocessor executes a predetermined pattern to manipulate an electrical pulse on the disk head, and then reads back the resulting magnetic pattern.

13) The disk microprocessor compares the predetermined pattern against what has been read back from the disk platter.

14) Assuming all of that checks out, a success status is passed back up the stack.
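To ground that list, here is a hedged sketch using only the standard JDK (11 or later; the names hello.txt and demo.zip are illustrative). The same top-layer calls write and read data whether the bytes end up in an ordinary disk file or inside a zip archive; the lower half of the stack is swapped out wholesale underneath unchanged application code:

    import java.io.IOException;
    import java.net.URI;
    import java.nio.file.*;
    import java.util.Map;

    public class AbstractionDemo {
        public static void main(String[] args) throws IOException {
            // Backend 1: an ordinary file on the local disk.
            Path onDisk = Files.writeString(Path.of("hello.txt"), "hello from a disk file\n");
            System.out.print(Files.readString(onDisk));

            // Backend 2: a "file" inside a zip archive, via the JDK's zip
            // filesystem provider. The driver-and-disk layers of the list
            // above are replaced, yet the calls below are identical in shape.
            URI zipUri = URI.create("jar:" + Path.of("demo.zip").toUri());
            try (FileSystem zipFs = FileSystems.newFileSystem(zipUri, Map.of("create", "true"))) {
                Path inZip = zipFs.getPath("/hello.txt");
                Files.writeString(inZip, "hello from inside a zip archive\n");
                System.out.print(Files.readString(inZip));
            }
        }
    }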

Phew! Most techies, or anyone who has done some low-level coding, can probably follow the steps down to the microcontroller level before it turns into pure electrical engineering. But very few can master the entire stack, and those who try will spend so long doing so that they never get beyond some incomplete Java code that reads a single file on a single machine, let alone build the next social networking sensation or solve the meaning of life.

Abstraction allows people to focus less on the nuts and bolts of building absolutely everything from raw materials and get on with doing useful "stuff" using basic building blocks.

Imagine you are building an application that will run across more than one server. You must deal with the complexity of maintaining user-session state across multiple servers and applications. If you want to scale the application across multiple datacenters, or even continents, you add another layer of complexity around concurrency, load, latency, and other factors.
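Abstraction helps here too. As a sketch (the interface and class names below are hypothetical, invented purely for illustration): hide where session state lives behind a small interface, and the calling code no longer cares whether that state sits in one server's memory or in a store shared across datacenters.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical abstraction: callers see put/get, not the storage location.
    interface SessionStore {
        void put(String sessionId, String data);
        String get(String sessionId);
    }

    // Single-server implementation: fast, but state dies with the process
    // and is invisible to every other server.
    class InMemorySessionStore implements SessionStore {
        private final Map<String, String> sessions = new ConcurrentHashMap<>();
        public void put(String sessionId, String data) { sessions.put(sessionId, data); }
        public String get(String sessionId) { return sessions.get(sessionId); }
    }

    public class SessionDemo {
        public static void main(String[] args) {
            // Swapping in a database- or cache-backed SessionStore later
            // would not change a line of this calling code.
            SessionStore store = new InMemorySessionStore();
            store.put("abc123", "user=alice;cart=3");
            System.out.println(store.get("abc123"));
        }
    }

A production implementation might back that interface with a shared database or cache so that any server, anywhere, can pick up the session.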

So if you had to build, and intimately understand, everything from your users' browsers down to the microcontroller on an individual disk, or the conversion of optical and voltage pulses into data flows across cables and telecommunications carrier equipment around the world just to move data between your servers, you would have your work cut out for you.

Source of Information: VMware Private Cloud Computing with vCloud Director
