BlackBerry Application Development - Two different approaches to application development

If you've visited the BlackBerry Developer website you may have noticed that there are two recommended approaches to developing applications for BlackBerry handhelds—Java Application Development and BlackBerry Web Development. This book is focused on the Java Application Development approach, which is the more versatile of the two, but the other can be very useful in the right situation.

• Java Application Development approach: This is the more powerful approach; it produces applications written in Java that are loaded onto and executed on a BlackBerry handheld. These applications are the focus of this book and are one of the most common ways to deploy an application. Two tools support this approach: the BlackBerry Java Development Environment (JDE) and the BlackBerry JDE Component Plug-in for Eclipse. Both offer the ability to create fully custom applications. The BlackBerry JDE is a standalone development environment, itself written in Java, while the plug-in leverages the Eclipse Integrated Development Environment (IDE), a platform already familiar to many Java developers.

• BlackBerry Web Development approach: Applications built this way run entirely within the BlackBerry Browser and can use web standards such as HTML and AJAX. They resemble common web applications and generally require network connectivity to work. More powerful features, including native API calls, are not available directly, but can be reached through BlackBerry Widgets, a separate SDK for creating small applets that web applications can leverage. Overall, this approach can be powerful, but its dependence on network connectivity potentially means data charges and/or delays in communication.

Source of Information : Packt - BlackBerry Java Application Development Beginners Guide 2010


The WiMax Forum harmonizes IEEE 802.16 and ETSI HIPERMAN into a single WiMax standard. The core components of a WiMax system are the subscriber station (SS), also known as the customer premises equipment (CPE), and the base station (BS). A BS and one or more CPEs can form a cell with a point-to-multipoint (PTM) structure, in which the BS exercises central control over the participating CPEs. The standard specifies the use of licensed and unlicensed bands within the 2- to 11-GHz range, allowing non-LOS (NLOS) transmission. NLOS operation is highly desirable for wireless service deployment: because it does not require high antennas to reach remote receivers, it reduces site interference and the cost of deploying CPEs. On the other hand, NLOS raises multipath transmission issues such as signal distortion and interference. WiMax employs a set of technologies to address these issues:

» OFDM: As discussed earlier in this chapter, OFDM uses multiple orthogonal narrowband carriers to transmit symbols in parallel, effectively reducing intersymbol interference (ISI) and frequency-selective fading.

» Subchannelization: The subchannelization of WiMax uses fewer OFDM carriers on the upstream link of a terminal, but each carrier operates at the same power level as the carriers of the base station. Concentrating the terminal's transmit power on fewer carriers extends the reach of its upstream signals and reduces its power consumption.

» Directional antennas: Directional antennas are advantageous in fixed wireless systems because they are more powerful in picking up signals than omnidirectional antennas; hence, a fixed CPE typically uses a directional antenna, while a fixed BS may use directional or omnidirectional antennas.

» Transmit and receive diversity: WiMax may optionally employ a transmit and receive diversity algorithm to make use of multipath and reflection using MIMO radio systems.

» Adaptive modulation: Adaptive modulation allows the transmitter to adjust its modulation scheme based on the SNR of the radio link. For example, if the SNR is 20 dB, 64 QAM is used to achieve high capacity; if the SNR is 16 dB, 16 QAM is used, and so on. Other NLOS techniques of WiMax, such as directional antennas and error correction, are used alongside it.

» Error-correction techniques: WiMax specifies the use of several error-correction codes and algorithms to recover frames lost to frequency-selective fading or burst errors. These include Reed-Solomon FEC, convolutional encoding, interleaving algorithms, and Automatic Repeat Request (ARQ) for frame retransmission.

» Power control : In a WiMax system, a BS is able to control power consumption of CPEs by sending power-control codes to them. The power-control algorithms improve overall performance and minimize power consumption.

» Security: Authentication between a BS and an SS is based on X.509 digital certificates with RSA public-key authentication. Traffic is encrypted using Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP), which uses the Advanced Encryption Standard (AES) for transmission security and data-integrity authentication. WiMax also supports the Triple Data Encryption Standard (3DES).
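The orthogonality that the OFDM bullet above relies on can be checked numerically: over one symbol period, the discrete correlation of two subcarriers at distinct integer frequencies sums to zero. A minimal sketch in plain Java (the carrier indices and sample count are arbitrary illustrative values):

```java
public class OfdmOrthogonality {

    // Discrete correlation of two cosine subcarriers over one symbol
    // period of n samples. For distinct integer frequencies k1 != k2
    // (with 0 < k < n/2) the sum is zero: the carriers are orthogonal.
    static double correlate(int k1, int k2, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            sum += Math.cos(2 * Math.PI * k1 * i / n)
                 * Math.cos(2 * Math.PI * k2 * i / n);
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(correlate(3, 5, 64)); // distinct carriers: ~0
        System.out.println(correlate(3, 3, 64)); // a carrier with itself: n/2
    }
}
```

Because the carriers do not interfere with each other even though their spectra overlap, the symbols on each can be recovered independently, which is what lets OFDM pack them so tightly.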
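The adaptive-modulation rule described above is essentially a threshold table keyed on SNR. A sketch of such a selector, using the 20 dB and 16 dB thresholds from the text (the QPSK and BPSK cut-offs below are assumed purely for illustration; a real system takes them from its deployment profile):

```java
public class AdaptiveModulation {

    // Threshold-based scheme selection. The 20 dB and 16 dB thresholds
    // come from the text; the lower two are illustrative assumptions.
    static String schemeFor(double snrDb) {
        if (snrDb >= 20.0) return "64-QAM"; // high SNR: highest capacity
        if (snrDb >= 16.0) return "16-QAM";
        if (snrDb >= 9.0)  return "QPSK";   // assumed threshold
        return "BPSK";                      // most robust fallback
    }

    public static void main(String[] args) {
        System.out.println(schemeFor(22.5)); // 64-QAM
        System.out.println(schemeFor(17.0)); // 16-QAM
        System.out.println(schemeFor(5.0));  // BPSK
    }
}
```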
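Of the error-correction techniques listed, interleaving is the easiest to illustrate: a block interleaver writes symbols into a matrix row by row and reads them out column by column, so a burst of channel errors lands on non-adjacent symbols, where the FEC code can correct them. A minimal sketch (the 2x3 dimensions are illustrative):

```java
import java.util.Arrays;

public class BlockInterleaver {

    // Write rows*cols symbols row by row, read them column by column.
    static byte[] interleave(byte[] data, int rows, int cols) {
        byte[] out = new byte[rows * cols];
        int k = 0;
        for (int c = 0; c < cols; c++)
            for (int r = 0; r < rows; r++)
                out[k++] = data[r * cols + c];
        return out;
    }

    // Reading with the dimensions swapped inverts the permutation.
    static byte[] deinterleave(byte[] data, int rows, int cols) {
        return interleave(data, cols, rows);
    }

    public static void main(String[] args) {
        byte[] frame = {1, 2, 3, 4, 5, 6};
        byte[] onAir = interleave(frame, 2, 3);
        // Adjacent input symbols are now separated on the channel.
        System.out.println(Arrays.toString(onAir));
        System.out.println(Arrays.toString(deinterleave(onAir, 2, 3)));
    }
}
```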
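As a rough illustration of the authenticated encryption the security bullet describes: the stock JDK does not ship CCM (the counter-with-CBC-MAC construction CCMP uses), but AES/GCM, which it does ship, demonstrates the same idea of combining confidentiality with integrity protection in one pass. A sketch using only standard `javax.crypto` classes:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class AeadSketch {

    // AES-GCM: encryption also produces a 128-bit authentication tag,
    // so any tampering is detected when decryption is attempted.
    static byte[] encrypt(SecretKey key, byte[] iv, byte[] plaintext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(plaintext);
    }

    static byte[] decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();
        byte[] iv = new byte[12];          // fresh IV per message in practice
        new SecureRandom().nextBytes(iv);
        byte[] ct = encrypt(key, iv, "frame payload".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(decrypt(key, iv, ct), StandardCharsets.UTF_8));
    }
}
```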

Initially, the WiMax Forum focused on fixed wireless access for home and business users using outdoor antennas (CPEs); indoor fixed access is under development. A base station may serve about 500 subscribers. WiMax vendors have begun to test fixed wireless broadband access in metropolitan areas such as Seattle. Due to its relatively high cost, the major targets of this technology are business users who want an alternative to T1 lines, rather than residential users. A second goal of the forum is to address portable wireless access without mobility support, and a third is to achieve mobile access with seamless mobility support (802.16e). Recall that a Wi-Fi hotspot offers wireless LAN access within the limited coverage of an AP; the WiMax Forum plans to build MetroZones that allow portable broadband wireless access. A MetroZone comprises base stations connected to each other via LOS wireless links, plus 802.16 interfaces for laptop computers or PDAs, which connect to the “best” base station for portable data access. This aspect of WiMax seems more compelling in terms of potential data rate than 3G cellular systems.

Like its Wi-Fi counterpart, the WiMax Forum aims to certify WiMax products in order to guarantee interoperability. In March 2005, Alvarion, Airspan, and Redline began to conduct the industry's first WiMax interoperability test. WiMax chips for fixed CPEs and base stations developed by Intel were to be released in the second half of 2005, and WiMax chips for mobile devices in 2007. At the time of this writing, some WiMax systems were expected to go into trial operation in late 2005.

Source of Information : Elsevier Wireless Networking Complete 2010

BlackBerry Application Development - General device capabilities

BlackBerry handhelds, like many smartphones today, are very powerful in spite of their small size. In terms of processing power and capability, these handhelds can accurately be described as smaller versions of our desktops and laptops: they offer many strong capabilities, yet their small size makes them convenient to carry around. This combination makes smartphones in general, and BlackBerry handhelds in particular, well suited for on-the-go applications.

But just what can they do? There are so many possibilities! Let's take a look at the general capabilities of BlackBerry handhelds.

• Every handheld has a keyboard designed for typing. BlackBerry handhelds have always been designed specifically for sending and receiving e-mail, and as a result the keyboards are well suited to entering free-form data. The BlackBerry SDK offers no fewer than ten different kinds of text fields that can be used in nearly any kind of application. Plus, if you need something special, you can always create your own!

• Another area in which BlackBerry handhelds excel is network connectivity. Again, this is by design, in order to provide excellent e-mail service. Whether you need HTTP or UDP, the BlackBerry SDK supports all of the major networking protocols and can handle both sending and receiving raw data, including fully encrypted TCP/IP communication. Furthermore, you can leverage the same secure protocols that are used to deliver e-mail.

• Most applications will need to store data on the local device. Applications can, of course, store data on the device in their own private stores, but they can also access and interface with other applications on the handheld. These include the pre-installed applications such as messages, address book, and calendar that come with the handheld.

• Cameras are nearly ubiquitous on smartphones and can be accessed by an application as well.

• Many newer devices include removable memory card slots for storing large media files. Applications with large storage needs can access this storage as well, giving them the room to work.

• Another extremely common feature on handhelds is a GPS receiver, which enables location-based services (LBS). This is the area that many in the smartphone industry say holds the most promise for the future.
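Location-based services usually start from a distance computation between two coordinates. As a desktop-runnable sketch of the kind of math an LBS application performs (on a real handheld the fixes would come from the device's location APIs, such as JSR-179, rather than hard-coded values), here is the haversine great-circle distance:

```java
public class Haversine {

    static final double EARTH_RADIUS_KM = 6371.0;

    // Great-circle distance between two points given in decimal degrees.
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Roughly Waterloo to Toronto: on the order of 95 km.
        System.out.println(distanceKm(43.4643, -80.5204, 43.6532, -79.3832));
    }
}
```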

Source of Information : Packt - BlackBerry Java Application Development Beginners Guide 2010


Any new technology brings with it changes to the process, and virtualization is no exception. Although there are challenges, they are usually outweighed by the benefits, assuming you understand and address them up front. The following is a list of common issues that must be considered when developing a virtualization strategy.

» License management. It is somewhat easy to track operating system and application licenses in a traditional datacenter or across user desktops. However, licensing in a virtualized world is different and often confusing. Make sure you understand how your vendors license virtual instances of their products and ensure your engineers adhere to licensing policy. It is very easy to bring up VMs without thinking about license availability.

» New skill sets. Configuring, monitoring, and managing virtualized environments require skills not typically found in in-house resources. This is a challenge easily met with training and new hiring requirements.

» Support from application vendors. The big questions: Will your application vendor support its software within your selected virtual environment? Does the application even run virtualized? Does the vendor know or care?

» Additional complexity. It should not be a surprise that virtualization adds another layer of complexity to your infrastructure.

» Security. Security on VMs is not very different from standard server security. However, the underlying layers (i.e., the hypervisor and related services) require special consideration, including adjustments to antivirus solutions. Apart from technology differences, the ease with which engineers can build VMs can result in explosive growth of unplanned, unmonitored, and insecure servers. Make sure your change management process is adjusted, policies updated, and staff trained on what is and is not acceptable behavior.

» Image proliferation. This might not be a bad thing unless the images you keep on the virtual shelf are rife with weak configurations or other challenges you might not want spreading like a disease across your datacenter.

» Ineffectiveness of existing management and monitoring tools. As we hinted in the security bullet, your tried-and-true monitoring and management tools might not cover the intricacies of virtualization management.

» Inability of the LAN/WAN infrastructure to support consolidated servers. What happens to your switch when you replace several single traditional servers with one or more beefy hardware platforms running multiple VMs? If you cannot answer this or similar questions, you are not quite ready to make the leap to virtualization.

And these are just the thought-provoking issues we could come up with. You may have your own set, reflecting the unique way you do business.

Source of Information : Elsevier-Microsoft Virtualization Master Microsoft Server Desktop Application and Presentation

Virtualization and business continuity

Business continuity is an important consideration in system design, covering everything from isolated system failures to the destruction of a datacenter. Traditional system recovery documentation provides instructions for rebuilding a system on hardware identical to that which is no longer accessible or operational. The problem is that there are usually no guarantees your disaster recovery or hardware vendors will be able to duplicate the original hardware.

Using different hardware can result in extended rebuild times as you struggle to understand why your applications do not function. Even if you can get the same hardware, you need to rebuild the environment from the ground up.

Finally, interruptions in business processes occasionally happen when systems are brought down for maintenance. You understand the necessity, but your users seldom do.

Virtualization provides advantages over traditional recovery methods, including:

» Breaking hardware dependency. Since the hypervisor provides an abstraction layer between the operating environment and the underlying hardware, you do not need to duplicate failed hardware to restore critical processes.

» Increased server portability. If you create virtual images of your critical system servers, it does not matter what hardware you use to recover from a failure—as long as the recovery server supports your hypervisor and, if necessary, the load of multiple child partitions. Enhanced portability extends to recovering critical systems at your recovery test site, using whatever hypervisor-compatible hardware is available.

» Elimination of server downtime (almost). You may never reach the point at which maintenance downtime is eliminated, but virtualization can get you very, very close. Because of increased server portability, you can shift critical virtual servers to other devices while you perform maintenance on the production hardware. You can also patch or upgrade one partition without affecting other partitions. One way to accomplish this is via clustering, failing over from one VM to another in the same cluster. From the client perspective, there is no interruption in service, even during business hours.

» Quick recovery of end-user devices. When a datacenter goes, the offices in the same building often go as well. Further, satellite facilities can suffer catastrophic events requiring a complete infrastructure rebuild. The ability to deliver desktop operating environments via a centrally managed virtualization solution can significantly reduce recovery time.
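The clustering idea in the downtime bullet reduces, at its core, to redirecting clients to a surviving node. A deliberately tiny sketch of that selection step (the node names and health check are hypothetical; real clusters use heartbeat protocols and shared state, not a simple list scan):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

public class FailoverPicker {

    // Return the first node that passes the health check; clients are
    // redirected there when the active node is taken down for maintenance.
    static Optional<String> pickActive(List<String> nodes, Predicate<String> healthy) {
        return nodes.stream().filter(healthy).findFirst();
    }

    public static void main(String[] args) {
        List<String> cluster = List.of("vm-a", "vm-b", "vm-c");
        // Simulate vm-a going down for patching: the next healthy VM takes over.
        System.out.println(pickActive(cluster, n -> !n.equals("vm-a")));
    }
}
```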

It might seem that virtualization is an IT panacea. It is true that it can solve many problems, but it also introduces new challenges.

Source of Information : Elsevier-Microsoft Virtualization Master Microsoft Server Desktop Application and Presentation


Transitioning from a traditional computing environment to one based on strategic use of virtualization is not free. New servers are usually required to support multiple VMs or to implement, manage, or monitor App-V rollouts. And let us not forget training for IT staff. So why should management shift dollars from other projects to fund virtualization? Virtualization provides a long list of benefits to the business, including:

» Consolidation of workload to fewer machines. Server consolidation is usually one of the first benefits listed when IT begins to discuss virtualization. Although a definite benefit, you will probably only virtualize a subset of your datacenter—for reasons which will become obvious—resulting in limited ROI.

» Optimized hardware use. Most servers are underutilized. Placing multiple VMs on expensive server hardware drives processor, memory, disk, and other resources closer to recommended utilization thresholds. For example, instead of one application server using only 5-10% of its processing capability, multiple application servers on the same platform can drive average processor utilization up to 40% or 50%. This is a much better use of invested hardware dollars.

» Running legacy applications on new hardware. Any organization that has been around a few years has old applications it cannot live without. Rather, it has applications its users must have or civilization as we know it will collapse. As the software stands fast while hardware and operating systems evolve, you might find it difficult or impossible to run legacy applications on replacement platforms. Server and client virtualization provide opportunities to continue running older environments on hardware with which they are incompatible, thanks to the abstraction of operating environments from the underlying hardware components.

» Isolated operating environments. Have you ever needed to run two versions of an application at the same time on the same device? If so, isolated environments are a great way to facilitate this. Further, each operating environment can have its own registry entries, code libraries, and so on, so application incompatibilities are rare. Finally, failures or corruption in one environment will not affect other applications or data. Isolated-environment capabilities in App-V can sometimes be a bigger selling point than server consolidation.

» Running multiple operating systems simultaneously. You do not have to make the leap to Linux to have the need to run multiple server operating systems. Most organizations do not upgrade all servers to the latest version of Windows Server at the same time. So there are often various versions in the datacenter, running critical applications. Hyper-V partitioning allows you to consolidate servers running operating systems at various version or patch levels, without the risk of incompatibilities. If you are gradually introducing other operating systems into the datacenter, they can all happily coexist with current operating systems—in “sibling” partitions on the same hardware platform.

» Ease of software migration. Application streaming, coupled with isolated operating environments, makes end-user application deployment much easier. Using App-V, new application rollouts and upgrades to existing applications are easy and centrally managed.

» Quick buildup and tear-down of test environments. Testing is a big part of any internal development process, but rapid test environment builds are difficult to achieve. With virtualization, engineers create virtual image files which are quickly deployed when relevant system testing is required. Image files are also a great way to refresh a test environment when changes do not quite work as expected.
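The utilization arithmetic behind the consolidation bullets is worth making explicit: n lightly loaded servers collapsed onto one host of comparable per-server capacity run that host at roughly n times the per-server utilization. A sketch (hypervisor overhead is deliberately ignored here; a real sizing exercise must add it back):

```java
public class ConsolidationMath {

    // Aggregate utilization when n similarly loaded servers are collapsed
    // onto one host of comparable capacity, ignoring hypervisor overhead.
    static double consolidatedUtilization(int n, double perServerUtilization) {
        return n * perServerUtilization;
    }

    public static void main(String[] args) {
        // Eight servers idling at 5% each: the consolidated host runs near 40%.
        System.out.println(consolidatedUtilization(8, 0.05));
    }
}
```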

We believe this list represents the major reasons why an organization would want to move to virtualization, except for one. The final reason, improved business continuity, is so important we decided to give it special attention.

Source of Information : Elsevier-Microsoft Virtualization Master Microsoft Server Desktop Application and Presentation


Take a minute or two and look at the traffic volume for the keywords “social media” and “search engine optimization” on Google Trends. We’ll give you a hint: “social media” is a very popular search term.

What you will find is that social media clearly owns a lofty position in the minds of search marketers. But even though social media dominates the conversation today and increasingly warrants the attention and investments of Web professionals, the influence and importance of SEO remains vital to an enterprise’s bottom line. What has changed in SEO are the tools available to search professionals — including social media properties. And while social might be picking up steam, search has never slowed … ever. For these reasons, it is imperative that your SEO agency is ready, willing and able to serve in the age of more integrated Internet marketing.

In the process of selecting an SEO agency or evaluating your current agency, you will find a crowded marketplace of SEO providers, and nearly as many social media experts. The industry is maturing, bringing new channels and opportunities for consumer engagement and revenue, and driving traditional SEOs to adopt new approaches to acquiring traffic. Today, social media, paid search and mobile applications are forcing agencies to adopt new practices. In fact, finding a provider that focuses exclusively on SEO and not social media or advertising is increasingly rare.

Website Magazine’s Top 50 SEO Agencies provides a meaningful look into these companies, with the aim of helping narrow the field to some of the best choices for your business. Let this list act as a valuable resource should you be in pursuit of greater success in your SEO campaigns.


Website Magazine’s Top 50 Rankings are measures of a website’s popularity. Ranks are calculated using a proprietary method that focuses on average daily unique visits and page views over a specified period of time, as reported by multiple data sources. The website with the highest combination of factors is ranked in the first position. Conducting research, making formal comparisons and talking to existing clients and users before making any purchase decision is always recommended.
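The magazine's actual scoring method is proprietary, so purely to illustrate the kind of combination it describes (average daily unique visits and page views, weighted and sorted), here is a hypothetical ranking sketch; the weights and site names are invented:

```java
import java.util.Comparator;
import java.util.List;

public class SiteRanker {

    record Site(String name, double avgDailyUniques, double avgDailyPageViews) {}

    // Hypothetical weighted score; the real formula is proprietary.
    static List<Site> rank(List<Site> sites, double wUniques, double wViews) {
        Comparator<Site> byScore = Comparator.comparingDouble(
                (Site s) -> wUniques * s.avgDailyUniques + wViews * s.avgDailyPageViews);
        return sites.stream().sorted(byScore.reversed()).toList();
    }

    public static void main(String[] args) {
        List<Site> ranked = rank(List.of(
                new Site("agency-one.example", 1_000, 4_000),
                new Site("agency-two.example", 3_000, 2_000)),
                0.7, 0.3);
        System.out.println(ranked.get(0).name()); // highest combined score first
    }
}
```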

Source of Information : Website Magazine for March 2011

The many complications and risks of tape

Magnetic tape technology was adopted for backup many years ago because it met most of the physical storage requirements, primarily by being ...