
Windows Server 2012 R2 Editions

When Windows Server 2012 was released, you had the choice between the Standard and Datacenter editions, each available in both Server Core and GUI installations. With the release of Windows Server 2012 R2, you have two more editions to choose from: Foundation and Essentials. Not only does each edition offer a different feature set, but the price of each license reflects those features. Let's discuss the differences among all the editions.


Standard Edition
This is the enterprise-class cloud server and the flagship OS. This chapter covers in detail the changes affecting the Standard edition, because it is the most popular choice. This edition is feature-rich and will handle just about all of your general networking needs. It can be used for multipurpose deployments or for individual roles, and it can be stripped down to Server Core for an even more secure and better-performing workhorse.


Datacenter Edition
This is Microsoft's "heavy-duty" virtualization server version. This is best used in highly virtualized environments because it sports unlimited virtual instance rights. That's right, I said unlimited! This is really the only difference between Datacenter and Standard, and of course this is reflected in the price; Datacenter costs about four times as much as Standard edition.


Foundation Edition
Foundation contains most of the core features found in the other editions, but there are some important limitations you should understand before you deploy it. Active Directory Certificate Services roles are limited to certificate authorities only. Here are some other limitations:

  • The maximum number of users is 15.
  • The maximum number of Server Message Block (SMB) connections is 30.
  • The maximum number of Routing and Remote Access (RRAS) connections is 50.
  • The maximum number of Internet Authentication Service (IAS) connections is 10.
  • The maximum number of Remote Desktop Services (RDS) Gateway connections is 50.
  • Only one CPU socket is allowed.
  • It cannot host virtual machines or be used as a guest virtual machine.


Essentials Edition
This edition is intended for very small companies with up to 25 users and 50 devices. It is a very cost-effective way to provide small-business networking. Here are some, but not all, of the new features of Windows Server 2012 R2 Essentials:

  • Improved client deployment
  • Can be installed as a virtual machine or directly on a server
  • User group management
  • Improved file history
  • Includes BranchCache
  • Uses the dashboard to manage mobile devices
  • Includes System Restore

Internet Wiretapping and Carnivore

In the absence of encryption export controls and key-escrow systems, there are widespread fears that the U.S. government will turn to extraordinary surveillance measures in order to obtain information about criminal suspects. This brings us to the issue of Internet wiretaps and the deployment of other surveillance techniques by law-enforcement officials. To what extent should the Internet infrastructure allow or support these electronic surveillance architectures? Are these architectures susceptible to any unacceptable risks? How can we balance privacy rights with the need to monitor some digital communications in order to combat cybercrime, computer-related crime, and cyberterrorism? In order to answer these questions some historical perspective is essential.

The legality of wiretaps has a long and convoluted history in the United States. The first-known telephone wiretaps can be traced back to 1890. Since that time, telephone wiretapping has become a favorite tool of law-enforcement authorities. The Internet creates new threats and problems for law-enforcement officials. In a world where crime, like all other activities, is facilitated by digital technology, the ability to tap Internet communications seems indispensable.

Telephone or Internet wiretapping cannot be indiscriminate or undertaken on a whim by local police or federal authorities. The relevant legal principle is embodied in the Fourth Amendment, which protects citizens against unreasonable searches and seizures. This Amendment stipulates, "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated." While the Fourth Amendment is a simple and hallowed right, its application to wiretapping is quite complex. In Olmstead v. United States (277 U.S. 438 [1928]) the U.S. Supreme Court held that wiretaps did not violate the Fourth Amendment, since they did not amount to a physical search or a seizure of any property. But in Katz v. United States (389 U.S. 347 [1967]) and Berger v. New York (388 U.S. 41 [1967]), this controversial decision was overturned. In the former case, Charles Katz was convicted of illegal gambling in a federal court based on evidence collected through a tap on his phone. He appealed, and ultimately the Supreme Court ruled that the evidence based on the wiretapped conversations was inadmissible; on that basis it threw out the conviction. This Supreme Court, unlike the Court that decided the Olmstead case, regarded electronic surveillance as the equivalent of search and seizure, so it was covered by the Fourth Amendment. According to the majority opinion in the Katz ruling, "The Fourth Amendment protects people, not places. What a person knowingly exposes to the public, even in his own home or office, is not a subject of Fourth Amendment protection. But what he seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected."

Since the Katz decision it has been necessary to get a court-issued warrant in order to conduct a wiretap, and that wiretap must generally be of short duration and must be narrowly focused. Congress affirmed these rulings by adopting Title III of the Omnibus Crime Control and Safe Streets Act of 1968. This legislation requires a court order based on probable cause in order to engage in wiretapping.

While the Katz and Berger decisions are regarded as a great advancement for privacy rights, civil libertarians believe that the Supreme Court moved in the reverse direction in the case of United States v. Miller (425 U.S. 435 [1976]). In Miller the Court held that personal information provided to a third party loses its Fourth Amendment protection. As a result, if one's credit records are made available to law-enforcement authorities in the course of an investigation, they are no longer entitled to Fourth Amendment protection. Finally, in what some consider another blow to privacy, in Smith v. Maryland (442 U.S. 735 [1979]) the Supreme Court held that the numbers one dials to make a phone call, data collected with a "pen register" device, are not protected under the auspices of the Constitution. According to Dempsey (2000), "While the Court was careful to limit the scope of its decision, and emphasized subsequently that pen registers collect only a very narrow range of information, the view has grown up that transactional data concerning communications is not constitutionally protected."

In the mid-1980s, as the computer revolution accelerated, Congress attempted to anticipate privacy problems that would surface due to new electronic technologies. In 1986 it passed the Electronic Communications Privacy Act (ECPA). The ECPA clarified that wireless communications were to be given the same protection as wireline telephone communications. It amended the federal wiretap law so that it would now apply to cellular telephones, electronic mail, and pagers. According to Dempsey (1997), "The ECPA made it a crime to knowingly intercept wireless communications and e-mail, but authorized law enforcement to do so with a warrant issued on probable cause." The ECPA also established more precise rules for the deployment of pen registers, which identify the numbers dialed in an outgoing call.

Under this statutory framework, law-enforcement officials have enough latitude to engage in electronic surveillance whenever they deem it necessary. Empirical evidence also supports this supposition: The number of wiretaps has increased steadily from 564 in 1980 to 1,350 in 1999 (Schwartz 2001b). According to Steinhardt (2000), "In the last reporting period, the Clinton Administration conducted more wiretaps in one year than ever in history, and the number of 'roving wiretaps' (wiretaps of any phone a target might use, without specifying a particular phone) nearly doubled."

Civil liberties groups believe that this trend of more wiretaps and increasing numbers of intercepted communications will intensify thanks to the advent of digital communications along with growing concerns about terrorism. There are particular worries about the FBI's new Internet wiretapping system, called Carnivore. Carnivore is a packet sniffer that enables FBI agents working in conjunction with an ISP to intercept data passing to and from a criminal suspect. This monitoring close to the source of data transfers makes it easier to trace messages. The data are copied and then filtered to eliminate whatever information federal investigators are not entitled to examine. For the most part, Carnivore is used to track and log the senders and recipients of e-mail, so it functions primarily as a pen register or a "trap and trace" device. (A pen register collects electronic impulses that identify the numbers dialed for outgoing calls, and a trap and trace device collects the originating numbers of incoming calls.) The threshold for court approval for such wiretaps is low, since investigators need only demonstrate that the information has relevance for their investigation. According to Schwartz (2001), the FBI believes that Carnivore's value lies in its ability to be less inclusive than predecessor wiretapping technologies: "Agents can fine-tune the system to yield only the sources and recipients of the suspect's e-mail traffic, providing Internet versions of the phone-tapping tools that record the numbers dialed by a suspect and the numbers of those calling in."

Nonetheless, this sort of surreptitious surveillance exemplified by the FBI's Carnivore technology has provoked the ire of civil libertarians. For example, the Electronic Frontier Foundation objects to Carnivore because the use of packet sniffers on the Internet captures more information than the use of pen registers and trap and trace devices used for traditional telephone wiretapping. In Internet communications the contents of messages and sender-recipient header data are not separate. According to the EFF (2000), even though Carnivore will be filtering out unwanted e-mail and other communications information, "The Carnivore system appears to exacerbate the over collection of personal information by collecting more information than it is legally entitled to collect under traditional pen register and trap and trace laws."

In response to these criticisms, the FBI explains that it relies on a complex and finely tuned filtering system that selects messages based on criteria expressly set out in the court order. Thus, it will not intercept all e-mail messages, but only those transmitted to and from a particular account. If, for example, Joe is a Carnivore target who e-mails three companions, Mike, Nancy, and George, and the FBI is interested only in his communications with George, the communication with Mike and Nancy will be filtered out. It appears, however, that those messages that are intercepted do include content as well as the sender and recipient addresses.

Another problem for civil libertarians is the trustworthiness of the FBI. The FBI claims that it will only record e-mail communications to which it is entitled by the court order, but there is no way to ensure its compliance. The bureau has access to a massive stream of communications over an ISP's network, and no one, including the ISP, will be able to verify which information is intercepted. According to the ACLU, this type of surveillance constitutes a repudiation of the Fourth Amendment, which has been based on the premise "that the Executive cannot be trusted with carte blanche authority when it conducts a search" (Steinhardt 2000).

There are indeed good reasons to be unnerved by Carnivore. The initial scope of surveillance—the entire stream of communications of an ISP's clients—is truly unprecedented. The proximity to all of these data near their source (the ISP) and the lack of oversight clearly create the potential for abuse. Moreover, the FBI's poor track record in recent years, such as its inability to detect spying within its own ranks and its failure to hand over all of the evidence in the trial of the Oklahoma City bomber, Timothy McVeigh, has not inspired the public's confidence in its discretion and ability.

Civil liberties groups have also expressed dismay regarding the FBI's heavy-handed approach to the implementation of a law passed in 1994 known as the Communications Assistance for Law Enforcement Act (CALEA). According to this law,

A telecommunications carrier shall ensure that its equipment, facilities, or services ... are capable of expeditiously isolating and enabling the government, pursuant to a court order or other lawful authorization, to intercept, to the exclusion of other communications, all wire and electronic communications carried by the carrier within [its] service area.

The thrust of this regulation is that telephone companies must redesign their telecommunications infrastructure so that law-enforcement officials will continue to have the capability to engage in surveillance and wiretaps. Such redesign might be necessitated by new technologies that could impede interception of communications. One problem with CALEA has been the FBI's peremptory insistence that this legislation mandates that phone companies build in capabilities that exceed traditional interception procedures. For example, the FBI insists that wireless service providers have location-tracking capability built into their systems. Clearly, this poses another new threat to privacy. As Dempsey (1997) points out, the FBI will continue to dominate the implementation process and ignore privacy concerns "unless the other government institutions exercise the authority granted them under the statute to promote the counterbalancing values of privacy and innovation."

Nonetheless, the FBI and other federal law-enforcement officials are entrusted with safeguarding national security and public safety, and September 11 reminds us of the need for a heightened security consciousness. Hence, there is a need for responsible surveillance and wiretapping on the Internet that respects the delicate balance between order and liberty. If Carnivore is to be retained by the FBI, there must be more public oversight of its various uses. It may be necessary for Congress to raise the standard for the use of pen registers on the Internet, given the difficulty of separating origin and destination addresses from the content of e-mail communications. Also, there is no national reporting requirement for pen register court orders (as there is for wiretaps), and this too must be changed. There needs to be more publicity and accountability about the collection of these data. These and other measures will be essential if the integrity of the Fourth Amendment is to be preserved in the face of technologies like Carnivore.

Finally, the need for convergence is acute. We now have a patchwork approach to surveillance rules, with different standards for telephone, cable, and other communication systems. For example, according to the Cable Act of 1984 there is a high burden for government agencies to meet when requesting permission to monitor computers that use cable modems. The act also requires that the target of the surveillance be provided an opportunity to challenge the request. It would clearly be preferable to have one standard for all of these technologies, and that standard should give judges greater discretion over the process of granting requests for surveillance and wiretaps.

Private Cloud Great Abstraction Layer

In the early days of the computer, people assembled systems by hand and then spent endless days debugging problems and trying to reproduce intermittent errors just to produce a stable environment to work with.

Fast-forward 30 years. People still assemble systems by hand, and they still spend endless hours debugging complex problems.

So, does that mean that nothing has changed? No, the fundamental drive of human endeavor to do more still persists, but look what you can do with even the most basic computer in 2013 compared to the 1980s.

In the '80s, your sole interface to the computing power of your system was a keyboard and an arcane command line. The only way you could exchange information with your peers was by manually copying files to a large-format floppy disk and physically exchanging that media.

Thirty years later, our general usage patterns of computers haven't changed that significantly: We use them to input, store, and process information. What has changed significantly is the massive number of sources and formats of input and output methods—the USB flash drive, portable hard disks, the Internet, email, FTP, and BitTorrent, among others. As a result, there is a massive increase in the expectations of users about what they will be able to do with a computer system.

This increase in expectation has only been possible because each innovation in computing has leapfrogged the previous one, often driven by pioneering vendors (or left-field inventors) rather than by agreed standards. However, as those pioneering technologies established a market share, their innovations became standards (officially or not) and were iterated on by future innovators adding functionality and further compatibility. In essence, abstraction is making it easier for the next innovator to get further, faster, and cheaper.

The history of computing (and indeed human development) has been possible because each new generation of technology has stood on the shoulders of its predecessors. In practical terms, this has been possible because of an ongoing abstraction of complexity. This abstraction has also made it feasible to replace or change the underlying processes, configurations, or even hardware without significantly impacting applications that rely on it. This abstraction eventually became known as an application programming interface (API)—an agreed demarcation point between various components of a system.

Here's an example. Ask the typical 2013 graduate Java developer how a hard disk determines which sectors and tracks to read a file from over a Small Computer System Interface (SCSI) bus. You'll probably get a shrug and "I just call java.io.FileReader and it does its thing." That's because, frankly, they don't care. And they don't care because they don't need to. A diligent design and engineering team has provided them with an API call that masks the underlying complexities of talking to a physical disk—reading 1s and 0s from a magnetic track, decoding them and turning them into usable data, correcting any random errors, and ensuring that any errors are handled gracefully (most of the time). That same application is ignorant of whether the file is stored on a SCSI or a SATA (Serial Advanced Technology Attachment) disk, or even accessed over a network connection, because it is abstracted.
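
For instance, here is a minimal sketch of what that single call looks like from the developer's side (the file name example.txt and the error handling are purely illustrative):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ReadFileExample {
        public static void main(String[] args) {
            // One line of "intent"; the JVM, OS, filesystem driver, and disk
            // firmware handle everything below this level of abstraction.
            try (BufferedReader reader = new BufferedReader(new FileReader("example.txt"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            } catch (IOException e) {
                // Errors from anywhere in the stack surface as a single exception type.
                e.printStackTrace();
            }
        }
    }

Nothing in that code changes whether the bytes come from a SCSI disk, a SATA disk, or a network share; that is exactly the point of the abstraction.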

If you map this out, your Java application follows steps similar to these through the stack:

1) The Java developer creates the user code.

2) The developer runs the Java function java.io.FileReader.

3) The framework converts the Java function into the appropriate operating system (OS) API call for the OS the code is running on.

4) The operating system receives the API call.

5) The operating system job scheduler creates a job to accomplish the request.

6) The kernel dispatches the job to the filesystem driver.

7) The filesystem driver creates pointers, determines metadata, and builds a stream of file content data.

8) The disk subsystem driver packages the file data into a sequence of SCSI bus commands and performs the necessary hardware register manipulations and CPU interrupts.

9) The disk firmware responds to commands and receives data issued over a bus.

10) The disk firmware calculates the appropriate physical disk platter location.

11) The disk firmware manipulates the voltage to microprocessors, motors, and sensors over command wires to move physical disk heads into a predetermined position.

12) The disk microprocessor executes a predetermined pattern to manipulate an electrical pulse on the disk head, and then reads back the resulting magnetic pattern.

13) The disk microprocessor compares the predetermined pattern against what has been read back from the disk platter.

14) Assuming all of that is okay, a success status is sent back up the stack.

Phew! Most techies or people who have done some low-level coding can probably follow most of the steps down to the microcontroller level before it turns into pure electrical engineering. But most techies can't master all of this stack, and if they try, they'll spend so long mastering it that they'll get no further than some incomplete Java code that can only read a single file and only runs on a single machine—let alone become the next social networking sensation or solve the meaning of life.

Abstraction allows people to focus less on the nuts and bolts of building absolutely everything from raw materials and get on with doing useful "stuff" using basic building blocks.

Imagine you are building an application that will run across more than one server. You must deal with the complexities of maintaining the state of user sessions across multiple servers and applications. If you want to scale this application across multiple datacenters and even continents, doing so adds another layer of complexity related to concurrency, load, latency, and other factors.

So if you had to build, and intimately understand, everything from your users' browser down to the microcontroller on an individual disk, or the conversion of optical and voltage pulses into data flows across the cables and telecommunications carrier equipment that carry data between your servers around the world, you would have your work cut out for you.

Source of Information: VMware Private Cloud Computing with vCloud Director

Berkeley Motes

The Berkeley motes are a family of embedded sensor nodes sharing roughly the same architecture.

Let us take the MICA mote as an example. The MICA motes have a two-CPU design. The main microcontroller (MCU), an Atmel ATmega103L, takes care of regular processing. A separate and much less capable coprocessor is only active when the MCU is being reprogrammed. The ATmega103L MCU has 128 KB of integrated flash memory and 4 KB of data memory. Given these small memory sizes, writing software for motes is challenging. Ideally, programmers should be relieved from optimizing code at the assembly level to keep the code footprint small. However, high-level support and software services are not free. Being able to mix and match only the necessary software components to support a particular application is essential to achieving a small footprint.

In addition to the memory inside the MCU, a MICA mote also has a separate 512 KB flash memory unit that can hold data. Since the connection between the MCU and this external memory is via a low-speed serial peripheral interface (SPI) protocol, the external memory is more suited for storing data for later batch processing than for storing programs. The RF communication on MICA motes uses the TR1000 chip set (from RF Monolithics, Inc.) operating at 916 MHz band. With hardware accelerators, it can achieve a maximum of 50 kbps raw data rate. MICA motes implement a 40 kbps transmission rate. The transmission power can be digitally adjusted by software through a potentiometer (Maxim DS1804). The maximum transmission range is about 300 feet in open space.

Like other types of motes in the family, MICA motes support a 51-pin I/O extension connector. Sensors, actuators, serial I/O boards, or parallel I/O boards can be connected via the connector. A sensor/actuator board can host a temperature sensor, a light sensor, an accelerometer, a magnetometer, a microphone, and a beeper. The serial I/O (UART) connection allows the mote to communicate with a PC in real time. The parallel connection is primarily for downloading programs to the mote.

It is interesting to look at the energy consumption of various components on a MICA mote. Radio transmission draws the most power. However, each radio packet (e.g., 30 bytes) takes only 4 ms to send, while listening for incoming packets keeps the radio receiver on all the time. The energy needed to send one packet can power the radio receiver for only about 27 ms. Another observation is that there are huge differences among the power consumption levels of the MCU's active, idle, and suspend modes. It is thus worthwhile, from an energy-saving point of view, to suspend the MCU and the RF receiver as long as possible.
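
A rough back-of-the-envelope sketch of that trade-off, using only the figures quoted above (4 ms to transmit a roughly 30-byte packet, and one packet's worth of energy powering the receiver for about 27 ms); the one-packet-per-second workload is an assumed, illustrative value:

    public class MoteEnergySketch {
        public static void main(String[] args) {
            // Figures from the text: sending one ~30-byte packet takes 4 ms,
            // and the energy of one packet powers the receiver for ~27 ms.
            double txTimePerPacketMs = 4.0;
            double rxTimeEquivalentMs = 27.0;

            // Implied ratio of transmit power to receive power (~6.75x).
            double txToRxPowerRatio = rxTimeEquivalentMs / txTimePerPacketMs;

            // Assumed, illustrative workload: one packet sent per second.
            double packetsPerSecond = 1.0;

            // Per second, transmitting costs the equivalent of ~27 ms of listening,
            // while leaving the receiver on costs a full 1000 ms of listening.
            double txCostInRxMsPerSecond = packetsPerSecond * rxTimeEquivalentMs;
            double idleListeningRxMsPerSecond = 1000.0;

            System.out.printf("Tx/Rx power ratio: %.2f%n", txToRxPowerRatio);
            System.out.printf("Energy per second (as ms of listening): tx=%.0f, idle listening=%.0f%n",
                    txCostInRxMsPerSecond, idleListeningRxMsPerSecond);
            // Conclusion mirrors the text: idle listening, not transmission,
            // dominates unless the receiver is duty-cycled aggressively.
        }
    }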

Source of Information: Elsevier Wireless Networking Complete 2010

Sensor Node Hardware

Sensor node hardware can be grouped into three categories, each of which entails a different set of trade-offs in the design choices.

● Augmented general-purpose computers: Examples include low-power PCs, embedded PCs (e.g., PC104), custom-designed PCs (e.g., Sensoria WINS NG nodes), and various personal digital assistants (PDAs). These nodes typically run off-the-shelf (OTS) operating systems such as Win CE, Linux, or real-time operating systems and use standard wireless communication protocols such as Bluetooth or IEEE 802.11. Because of their relatively higher processing capability, they can accommodate a wide variety of sensors, ranging from simple microphones to more sophisticated video cameras. Compared with dedicated sensor nodes, PC-like platforms are more power hungry. However, when power is not an issue, these platforms have the advantage that they can leverage the availability of fully supported networking protocols, popular programming languages, middleware, and other OTS software.

● Dedicated embedded sensor nodes: Examples include the Berkeley mote family, the UCLA Medusa family, Ember nodes, and MIT μAMP. These platforms typically use commercial OTS (COTS) chip sets with emphasis on small form factor, low-power processing and communication, and simple sensor interfaces. Because of their COTS CPU, these platforms typically support at least one programming language, such as C. However, in order to keep the program footprint small to accommodate their small memory size, programmers of these platforms are given full access to hardware but barely any operating system support. A classical example is the TinyOS platform and its companion programming language, nesC.

● System-on-chip (SoC) nodes: Examples of SoC hardware include smart dust, the BWRC picoradio node, and the PASTA node. Designers of these platforms try to push the hardware limits by fundamentally rethinking the hardware architecture trade-offs for a sensor node at the chip design level. The goal is to find new ways of integrating CMOS, MEMS, and RF technologies to build extremely low power and small footprint sensor nodes that still provide certain sensing, computation, and communication capabilities. Since most of these platforms are currently in the research pipeline with no predefined instruction set, there is no software platform support available.

Source of Information: Elsevier Wireless Networking Complete 2010

Sensor Network Platforms and Tools

We discussed various aspects of sensor networks, including sensing and estimation, networking, infrastructure services, sensor tasking, and data storage and query. A real-world sensor network application most likely has to incorporate all these elements, subject to energy, bandwidth, computation, storage, and real-time constraints. This makes sensor network application development quite different from traditional distributed system development or database programming. With ad hoc deployment and frequently changing network topology, a sensor network application can hardly assume an always-on infrastructure that provides reliable services such as optimal routing, global directories, or service discovery.

There are two types of programming for sensor networks, those carried out by end users and those performed by application developers. An end user may view a sensor network as a pool of data and interact with the network via queries. Just as with query languages for database systems like SQL, a good sensor network programming language should be expressive enough to encode application logic at a high level of abstraction, and at the same time be structured enough to allow efficient execution on the distributed platform. Ideally, end users should be shielded from the details of how sensors are organized and how nodes communicate.

On the other hand, an application developer must provide end users of a sensor network with the capabilities of data acquisition, processing, and storage. Unlike general distributed or database systems, collaborative signal and information processing (CSIP) software comprises reactive, concurrent, distributed programs running on ad hoc, resource-constrained, unreliable computation and communication platforms. Developers at this level have to deal with all kinds of uncertainty in the real world. For example, signals are noisy, events can happen at the same time, communication and computation take time, communications may be unreliable, battery life is limited, and so on. Moreover, because of the amount of domain knowledge required, application developers are typically signal and information processing specialists, rather than operating systems and networking experts. How to provide appropriate programming abstractions to these application writers is a key challenge for sensor network software development. In this chapter, we focus on software design issues to support this type of programming.

To make our discussion of these software issues concrete, we first give an overview of a few representative sensor node hardware platforms. We then present the challenges of sensor network programming that arise from the massively concurrent interaction with the physical world, followed by TinyOS for Berkeley motes and two types of node-centric programming interfaces: an imperative language, nesC, and a dataflow-style language, TinyGALS. Node-centric designs are typically supported by node-level simulators such as ns-2 and TOSSIM. State-centric programming is a step toward programming beyond individual nodes. It gives programmers platform support for thinking in high-level abstractions, such as the state of the phenomena of interest over space and time.

Source of Information: Elsevier Wireless Networking Complete 2010

Ad Hoc Wireless Sensor Networks

Advances in microelectronics technology have made it possible to build inexpensive, low-power, miniature sensing devices. Equipped with a microprocessor, memory, radio, and battery, such devices can now combine the functions of sensing, computing, and wireless communication into miniature smart sensor nodes, also called motes. Since smart sensors need not be tethered to any infrastructure because of on-board radio and battery, their main utility lies in being ad hoc, in the sense that they can be rapidly deployed by randomly strewing them over a region of interest. Several applications of such wireless sensor networks have been proposed, and there have also been several experimental deployments. Example applications are:

● Ecological Monitoring: wildlife in conservation areas, remote lakes, forest fires.

● Monitoring of Large Structures: bridges, buildings, ships, and large machinery, such as turbines.

● Industrial Measurement and Control: measurement of various environment and process parameters in very large factories, such as continuous process chemical plants.

● Navigation Assistance: guidance through the geographical area where the sensor network is deployed.

● Defense Applications: monitoring of intrusion into remote border areas; detection, identification, and tracking of intruding personnel or vehicles.

The ad hoc nature of these wireless sensor networks means that the devices and the wireless links will not be laid out to achieve a planned topology. During operation, sensors may be difficult or even impossible to access, and hence their network needs to operate autonomously. Moreover, with time it is possible that sensors fail (one reason being battery drain) and cannot be replaced. It is, therefore, essential that sensors learn about each other and organize into a network on their own. Another crucial requirement is that since sensors may often be deployed randomly (e.g., simply strewn from an aircraft), in order to be useful, the devices need to determine their locations. In the absence of centralized control, this whole process of self-organization needs to be carried out in a distributed fashion. In a sensor network, there is usually a single, global objective to be achieved. For example, in a surveillance application, a sensor network may be deployed to detect intruders. The global objective here is intrusion detection. This can be contrasted with multihop wireless mesh networks, where we have a collection of source-destination pairs, and each pair is interested in optimizing its individual performance metric. Another characteristic feature of sensor networks appears in the packet scheduling algorithms used. Sensor nodes are battery-powered and the batteries cannot be replaced. Hence, energy-aware packet scheduling is of crucial importance.

A smart sensor may have only modest computing power, but the ability to communicate allows a group of sensors to collaborate to execute tasks more complex than just sensing and forwarding the information, as in traditional sensor arrays. Hence, they may be involved in online processing of sensed data in a distributed fashion so as to yield partial or even complete results to an observer, thereby facilitating control applications, interactive computing, and querying. A distributed computing approach will also be energy-efficient as compared to mere data dissemination since it will avoid energy consumption in long haul transport of the measurements to the observer; this is of particular importance since sensors could be used in large numbers due to their low cost, yielding very high resolutions and large volumes of sensed data. Further, by arranging computations among only the neighboring sensors the number of transmissions is reduced, thereby saving transmission energy. A simple class of distributed computing algorithms would require each sensor to periodically exchange the results of local computation with the neighboring sensors. Thus the design of distributed signal processing and computation algorithms, and the mapping of these algorithms onto a network, is an important aspect of sensor network design.
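
To make that last idea concrete, here is a minimal, hypothetical sketch of one such neighborhood exchange, written as a simple local averaging step; the node names, topology, and readings are invented for illustration and are not drawn from any particular platform:

    import java.util.List;
    import java.util.Map;

    public class NeighborhoodAveraging {
        // One round of local computation: a node replaces its estimate with the
        // average of its own reading and those reported by its radio neighbors.
        static double localAverage(double ownReading, List<Double> neighborReadings) {
            double sum = ownReading;
            for (double r : neighborReadings) {
                sum += r;
            }
            return sum / (neighborReadings.size() + 1);
        }

        public static void main(String[] args) {
            // Hypothetical readings from node A and its two neighbors B and C.
            Map<String, Double> readings = Map.of("A", 21.0, "B", 23.0, "C", 22.0);
            double estimateAtA = localAverage(readings.get("A"),
                    List.of(readings.get("B"), readings.get("C")));
            // Repeating such rounds across the network spreads information without
            // shipping every raw measurement back to the observer.
            System.out.println("Node A local estimate: " + estimateAtA);
        }
    }

Because each node only talks to its immediate neighbors, the number of long-haul transmissions, and hence the transmission energy, is kept small.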

Design and analysis of sensor networks must take into account the native capabilities of the nodes, as well as architectural features of the network. We assume that the sensor nodes are not mobile. Further, nodes are not equipped with position-sensing technology, like the Global Positioning System (GPS). However, each node can set its transmit power at an appropriate level — each node can exercise power control. Further, each node has an associated sensing radius; events occurring within a circle of this radius centered at the sensor can be detected.

In general, a sensor network can have multiple sinks, where the traffic generated by the sensor sources leaves the network. We consider networks in which only a single sink is present. Further, we will be concerned with situations in which sensors are randomly deployed. In many scenarios of practical interest, preplanned placing of sensors is infeasible, leaving random deployment as the only practical alternative; e.g., consider a large terrain that is to be populated with sensors for surveillance purposes. In addition, random deployment is a convenient assumption for analytical tractability in models. Our study will also assume a simple path loss model, with no shadowing and no fading in the environment.

Source of Information: Elsevier Wireless Networking Complete 2010

Mobile Web Considerations

What makes a good mobile website? This is an impossible question to answer, because design and taste are always highly subjective matters. But certain considerations are worth bearing in mind from the start, and these considerations will undoubtedly help you create positive user experiences for your mobile users.



Recognizing Mobile Users
It should go without saying that the most important aspect of developing a mobile website is to ensure that it is available and easy to reach! This sounds straightforward, of course, but it can actually become relatively involved: It's a fair assumption that existing site owners are very careful to promote and use their current website URL consistently. If you want to create a separate site for your mobile users, should it be a different URL? Should it appear on the same URL? If so, how does the server or CMS know whether to present one type of site or another? How can you cater to user choice and potentially let them switch back and forth between your desktop and mobile sites? How can you publicize the (attractive) fact that the mobile site exists at all? And ensure that it is correctly listed in search engines and directories?

There are glib answers to all these questions, but each has a level of subtlety to it, and no matter which technique you use for hosting, selecting, and publicizing your mobile presence, it is inevitable that you will have to distinguish between mobile users and desktop users. In reality, this means detecting mobile and desktop browsers and then providing different sites or templates accordingly. Users find content in the strangest ways, and it remains the site owner's responsibility to ensure that the right type of experience is given to each type of user. You will look at a number of techniques for doing this, both in the general sense and for specific content management systems.
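
As a rough illustration of that detection step, here is a hypothetical sketch of a Java servlet filter that inspects the User-Agent header and forwards likely mobile browsers to a separate set of templates; the keyword list, the /m path prefix, and the view=desktop override are placeholder assumptions, and real deployments typically consult a maintained device database rather than a hand-rolled list:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class MobileDetectionFilter implements Filter {
        // Placeholder keyword list; production sites usually use a device database.
        private static final String[] MOBILE_KEYWORDS = {
            "iphone", "android", "blackberry", "windows phone", "opera mini"
        };

        @Override
        public void init(FilterConfig filterConfig) throws ServletException {
        }

        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest req = (HttpServletRequest) request;
            HttpServletResponse res = (HttpServletResponse) response;

            // Let the user override detection (e.g., a "view desktop site" link).
            boolean forceDesktop = "desktop".equals(req.getParameter("view"));

            if (!forceDesktop && isMobile(req.getHeader("User-Agent"))) {
                // Hypothetical mobile entry point; a CMS would typically swap
                // the active theme here instead of redirecting.
                res.sendRedirect("/m" + req.getRequestURI());
                return;
            }
            chain.doFilter(request, response);
        }

        private boolean isMobile(String userAgent) {
            if (userAgent == null) {
                return false;
            }
            String ua = userAgent.toLowerCase();
            for (String keyword : MOBILE_KEYWORDS) {
                if (ua.contains(keyword)) {
                    return true;
                }
            }
            return false;
        }

        @Override
        public void destroy() {
        }
    }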



Thematic Consistency
A web standards body, the W3C, uses the term thematic consistency. This is not, as you may think, related to themes or the cosmetics of a site, but to the fact that, according to the body's "One Web" philosophy, the whole Web should be accessible from any device — so given a specific URL, any browser should receive the same content.

This is not to say that the same content should look the same (because the theming of a mobile web page can often be very different from that of its equivalent desktop page), nor even that users on different devices want to see the same content (because they are quite possibly in a different context, looking for possibly very different things).

But the One Web philosophy is valuable and important, and indeed URLs should always be used in a way that honors the Uniform adjective of the acronym. It would be counterproductive for the whole mobile web movement if it were to become a disconnected ghetto of content targeted at one type of device alone and did not share links, resources, and content with the vast existing Web. When you are building your mobile website, think carefully about how its information architecture is sympathetic to this: The same posts, pages, articles, products, and so forth should be easily and consistently accessible from all types of browsers, even if their appearance and surrounding user interface may be radically different.



Brand Consistency
It is also important to ensure that your own website's brand is preserved between its mobile and desktop versions. There should be consistency between the theming, color schemes, and the look and feel of different types of sites. If your desktop site is light and airy and your mobile site is dark and cluttered, you will confuse your users, many of whom may remember what your desktop site looks like and may find it hard to correlate that with the mobile site, damaging their trust in your brand or site.

The same goes for your logo, color scheme, feature images, graphical elements, and so on; within reason you should endeavor to reuse as much as possible between the two sites. Users need to feel that they are interacting with the same brand while being given an entirely optimized, mobile-centric experience.

Similarly, if your desktop site is renowned for a simple, cheerful, and highly efficient user interface and experience, your mobile users will expect the same of the mobile site. Indeed, due to its constraints, a mobile website obviously needs to have even more attention paid to its usability!



A Dedication to Usability
With limited real estate (both physically and in terms of pixels) and often very different input techniques — not to mention the fact that users may be in a more time-sensitive context, and with a slower network connection — a mobile device needs to be presented with a site interface that is as efficient to use as possible. At the very least, consider carefully any use of excessive forms, multi-page wizards to complete common or simple tasks, or complex menus to get to critical functionality. These are not likely to be appreciated by mobile users.

Instead, think hard about what mobile users want to do, and ensure that those critical tasks are as heavily optimized as possible on the mobile version of the interface. Arguably this was one of the big successes of the "native app" phenomenon: Although many apps were little more than views of a company's existing web content, the app paradigm allowed interface designers to think entirely afresh about how that content could be accessed. The popular pattern of a toolbar at the bottom of an app's screen with four or five important tasks that can be reached with a thumb seems a long way from the lengthy and complex menu bar across the top of a website, but it shows that the same information architecture and fundamental functionality can always be accessed using different user interface techniques. Think hard about which techniques work best for the new medium and types of devices you are targeting.



Remember Mobile
Finally, remember the point about the mobile device being so much more than merely a browser on a small screen. Yes, it's a phone, an address book, a game console, and so on, but it's also a device that is in the user's hand nearly every hour of the day, a device that brings unique capabilities and possibilities for you to design to.

Never forget that "mobile" is an adjective, not a noun. The important point about the mobile web is not that the user is holding a mobile phone, but that she is mobile. Make the most of the fact that the visitors to your website don't just have a small screen; rather, they are out and about in the real world, living their lives, staying connected — and they want to access everything you have to offer, whenever they want it, in a wonderful mobile way.

Source of Information: Wiley - Professional Mobile Web Development with WordPress, Joomla and Drupal
