Search! What You See

Monday, December 21, 2009

HP to Open Tech Support Call Centers in Africa

Even though India, Singapore, Mexico and the Philippines are generally more appealing as locations for technology support call centers, HP seems to want to set itself apart from the crowd by building such centers in northern African countries.
The company has not revealed how many facilities it plans to open, but it has disclosed that it is set to hire at least 1,000 people by the end of 2010. HP seems to have chosen a rather difficult period for founding such centers, especially since customers don't exactly enjoy having to work with foreign tech support.
In addition, consumers will likely be more prone to harsh criticism given the high unemployment, especially in the United States.
Still, HP doesn't appear to be overly concerned, at least not enough to follow the flock and build its tech support centers in India, Singapore, Mexico or the Philippines. "We see Africa as a potential base for providing all sorts of services and technical support for customers outside of Africa," said Rainer Koch, HP Managing Director, in a statement to Reuters.
"We plan to invest more in the future on the continent on that perspective." HP has not revealed which countries it is considering for the establishment of its centers, but it will likely have to choose wisely considering the many w...

HTC HD2 to Arrive at T-Mobile USA

The HTC HD2, the most appealing handset running Microsoft's Windows Mobile operating system that Taiwanese phone maker HTC Corporation has delivered to market this year, is expected to arrive in the US in early 2010, and T-Mobile might be the carrier to have it.
HTC hasn't said anything official on the matter, although it has promised that the HD2 would be available globally; a recently leaked pre-release ROM for the handset, however, suggests that the operator will have it.
According to a recent article on WMExperts, the HTC HD2 is indeed on its way to T-Mobile in the United States, as shown in the .nbh file from the aforementioned ROM.
Moreover, it seems there are some more goodies that T-Mobile users can expect this behemoth to bring along, including Leo ROM 2.01 (considerably newer than the previously leaked ROMs for the handset, it seems) and Windows Mobile 6.5 build 21869.
At the same time, the ROM also shows that the HTC HD2 is set to arrive stateside with Opera 9.7.0.35627 on board, along with Teeter 2.0, TMOUS_Manila_Core 2.5.1921401, and a reference to OzIM_US_1.0.5.1.139.
Either way, if this rumor pans out, T-Mobile USA will put on sale a handset clearly upgraded compared to the version already available for purchase in a number of markets around the world.

Monday, November 23, 2009

1 in 3 laptops die in first three years

So your new laptop computer died inside of a year. "I'll never buy a computer from [insert manufacturer name here] again!" I've heard the protests time and time again.

Yeah, maybe you got a lemon, but no matter which brand you bought, you truly are not alone in this situation: an analysis of 30,000 new laptops by SquareTrade, which provides aftermarket warranty coverage for electronics products, found that nearly a third of laptops (31 percent) will fail in the first three years of ownership.

That's actually better than I would have expected based on my experience and observations on how people treat their equipment.

SquareTrade has more detailed information (the full PDF of the company's study is available here) on the research on its website. But here are some highlights about how, why, and which laptops fail:

> 20.4 percent of failures are due to hardware malfunctions. 10.6 percent are due to drops, spills, or other accidental damage.

> Netbooks have a roughly 20 percent higher failure rate due to hardware malfunctions than standard laptops. The more you pay for your laptop, the less likely it is to fail in general (maybe because you're more careful with it?).

> The most reliable companies? A shocker: Toshiba and Asus, both with hardware-malfunction failure rates below 16 percent.

> The least reliable brands? Acer, Gateway, and HP. HP's hardware malfunction rate, the worst in SquareTrade's analysis, is a whopping 25.6 percent.

None of the numbers are overly surprising. As SquareTrade notes, "the typical laptop endures more use and abuse than nearly any other consumer electronic device (with the possible exception of cell phones)," so failures are really inevitable.
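As a back-of-envelope check, the 31-percent three-year figure can be turned into an implied annual failure rate, under the simplifying assumption (mine, not SquareTrade's) of a constant, independent failure probability each year:

```python
# SquareTrade's figure: ~31% of laptops fail within the first 3 years.
three_year_failure = 0.31

# Assume a constant yearly failure probability p, so survival over
# 3 years is (1 - p)**3. Solve (1 - p)**3 = 1 - 0.31 for p.
annual_failure = 1 - (1 - three_year_failure) ** (1 / 3)

print(f"Implied annual failure rate: {annual_failure:.1%}")  # ~11.6%
```

Real failure curves aren't flat (accidents and wear-out cluster differently over a laptop's life), so treat this as a rough ballpark rather than a prediction.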

Want to keep your notebook running for longer than a few years? Ensure your laptop is as drop-proofed as possible (use a padded bag or case, route cords so they won't be tripped on, lock children in another room), and protect it as best you can from heat and dust.

Friday, November 13, 2009

The DSi grows up

Following rumours claiming that Nintendo was going to release a new version of its portable console, the Japanese manufacturer decided to act quickly to confirm the information. The new version is a redesign of the DSi model equipped with a larger screen. Fittingly, it will be called the Nintendo DSi XL.

The console will be available in Japan from the end of November and should reach other markets during the first quarter of 2010. It's nice to think that players on the move will be able to enjoy greater visual comfort, isn't it?

4G on the horizon!

We are marching slowly but surely towards fourth-generation mobile telephony. This spring, Ericsson announced the installation of the first LTE network area in Stockholm. LTE is the name of the technology that will establish the specifications for future very-high-speed connections. The next step is now being taken with the development of USB devices that let laptops connect to the Swedish LTE network; that should be possible from March 2010.

The South Korean company Samsung is working on the project directly with Ericsson and will supply the accessories, which are being labelled as 4G. However, in reality it might be more appropriate to describe this technology as "proto-4G". Indeed, the real next-generation mobile network is not expected to come into operation until the end of 2011.

Monday, November 2, 2009

Microsoft CEO: IT spending won't fully recover

SEOUL, South Korea (AP) -- Microsoft CEO Steve Ballmer said Monday corporate spending on information technology will not recover to levels seen in recent years before the global economic slowdown.

"The economy went through a set of changes on a global basis over the course of the last year which are, I think is fair to say, once in a lifetime," Ballmer told a meeting of South Korean executives in Seoul.

Spending on information technology, which accounted for about half of capital expenditures in developed countries before the crisis, was unlikely to rebound fully because capital was more scarce these days, he said.

"While we will see growth, we will not see recovery," Ballmer said.

Ballmer was in Seoul to meet corporate and government officials and tout the Redmond, Washington-based company's new Windows 7 operating system. The latest edition of Windows, the software that runs personal computers, was released last month.

He said company purchases of PCs and servers were down about 15 percent globally.

"It reflects the fact that CEOs have much more tightly constrained IT budgets," he said.

Separately, South Korean technology giant Samsung Electronics Co. said it will work with Microsoft to find ways to make computers more energy efficient.

The announcement followed a meeting between Ballmer and Samsung CEO Lee Yon-woo. The company also said it will upgrade its corporate PCs worldwide with Microsoft's new operating system next year.

http://finance.yahoo.com/news/Microsoft-CEO-IT-spending-apf-97932053.html?x=0

Saturday, October 17, 2009

Bell Labs Breaks the 100 Petabit per Second·Kilometer Barrier

Alcatel-Lucent has announced that scientists at Bell Labs, the company's research arm, have set a new optical transmission record of more than 100 Petabits per second·kilometer (equivalent to 100 million Gigabits per second·kilometer). The transmission experiment involved sending the equivalent of 400 DVDs per second over 7,000 kilometers, roughly the distance between Paris and Chicago.

This is the highest capacity ever achieved over a transoceanic distance and exceeds that of today's most advanced commercial undersea cables by a factor of ten. To achieve these record-breaking results, the Bell Labs researchers made innovative use of new detection techniques and harnessed a diverse array of technologies in modulation, transmission, and signal processing.

High speed optical transmission is a key component of Alcatel-Lucent's High Leverage Network architecture, key elements of which have already been selected by leading service providers.

To achieve these record-breaking results researchers from the Bell Labs facility in Villarceaux, France used 155 lasers, each operating at a different frequency and carrying 100 Gigabits of data per second, to dramatically enhance the performance of standard Wavelength Division Multiplexing (WDM) technology.

"There is no question that this record-breaking transmission is a milestone in achieving the network capacity and speeds and a key step forward in satisfying the ongoing explosion in demand," said Gee Rittenhouse, head of Bell Labs Research. "This is a prime example of Bell Labs' preeminent research and demonstrates the ability of our researchers to solve complex problems," he explained.
The record-breaking figure was derived by multiplying the number of lasers by their 100 Gigabit per second transmission rate, then multiplying the aggregate 15.5 Terabit per second result by the 7,000-kilometer distance achieved. The combination of speed and distance, expressed in bits per second·kilometer, is a standard measure for high-speed optical transmission.
The transmissions were accomplished over a network whose repeaters, devices used to sustain optical signal strength over long distances, were spaced 90 kilometers apart. This spacing is 20% greater than that commonly maintained in such networks. The challenge of maintaining transmission over these distances was significantly heightened in these experiments because of the noise (perturbation of signals) that is introduced as transmission speeds increase.
The researchers also increased capacity by interfacing advanced digital signal processors with coherent detection, a new technology that makes it possible to acquire details for a greater number of properties of light than the direct detection method commonly applied in today's systems. Using this technique the researchers were able to effectively increase capacity by increasing the number of light sources introduced into a single fiber yet still separate the light into its constituent colors when it reached its destination.
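The arithmetic behind the headline figure is easy to reproduce from the numbers given in the article (155 lasers at 100 Gbit/s each, over 7,000 km):

```python
# Reproduce the record figure from the article's own numbers.
lasers = 155
per_laser_bps = 100e9          # 100 Gbit/s per wavelength
distance_km = 7_000

aggregate_bps = lasers * per_laser_bps   # 15.5 Tbit/s aggregate capacity
record = aggregate_bps * distance_km     # capacity x distance, in bit/s.km

print(f"Aggregate: {aggregate_bps / 1e12} Tbit/s")
print(f"Record:    {record / 1e15} Pbit/s.km")
```

This gives 108.5 Pbit/s·km, comfortably past the 100 Pbit/s·km mark the headline refers to.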

Tuesday, September 29, 2009

The sad truth about today’s Internet population

The world isn’t a fair place, and yet another way this is laid bare is the huge differences shown in Internet penetration among the population of the various world regions. We thought it would be interesting to see what kind of an effect this is having on the world Internet population of today.

First some quick observations before we head on to more charts, just to give you an idea of how level the playing field is NOT:

  • Today’s world Internet penetration is 24.7%.
  • North America, with only 5% of the world’s population, has 15% of the world Internet population thanks to having the world’s largest Internet penetration with 73.9% (largest for a world region, but there are individual countries with a higher percentage).
  • If Asia had the same Internet penetration as North America, it would have 2.81 billion Internet users. That’s 1.7 times the current Internet population of the entire world (1.67 billion).
  • If the entire world had the same Internet penetration as North America, the world Internet population would be 5 billion people.

There are lots of interesting “what if” scenarios like the above, but we don’t want to get too carried away or we’ll never get to the main part of this post, which is to look at how the differences in Internet penetration have affected the division of the Internet population as a whole.

World population share vs. Internet population share

What happens when you compare how the actual world population is divided with how the world’s Internet users are divided?

First off, here below is a pie chart that shows the share each world region has of the planet’s 6.77 billion people. This is also what the division of the Internet population would look like if the Internet penetration were equal all over the world, but of course it isn’t.

Now let’s look at the ACTUAL division of the world Internet population.

Thanks to the different levels of Internet penetration the balance shifts drastically. This is the share each region has of the world’s 1.67 billion Internet users:

Look at how Africa dwindles, look at how Asia shrinks. Giants in terms of population, but the weight of those populations is greatly diminished online. In contrast, regions like Europe and North America swell past the actual weight of their populations. You could say the Internet is a different world, one where the population truly reflects the different levels of industrialization in the world.

We could throw more statistics your way, but we’ll call it a day this time.

It’s when we see this kind of data that we really count ourselves lucky to live in Sweden, which has an Internet penetration of 80% and decent broadband connections for more or less anyone who wants it.

(Oops, another statistic . . .)

Saturday, September 19, 2009

End-to-end 4G networks

WiMAX provides an immediate solution to deliver broadband services to areas poorly served or not served at all by existing telecom infrastructure. In particular, in developing regions many new second and third tier operators can deploy WiMAX cost effectively to serve residential and business users with broadband services including internet access, voice over IP and video on demand.

The StarMAX portfolio of WiMAX Forum Certified wireless broadband access products provides a comprehensive end-to-end WiMAX solution that fully integrates and optimizes WiMAX access, wireless backhaul and complementary core network elements. Together with comprehensive management systems and a complete suite of turnkey network services, Harris Stratex Networks can deliver a properly engineered and fully deployed WiMAX network, providing operators with everything that they need, including the highest level of service quality and top-tier customer service that they may not find from other vendors.

Our solution includes best-in-class components to provide a complete WiMAX ecosystem:

  • Market leading and industry-standards compliant WiMAX base station and subscriber units for fixed, nomadic and mobile applications, including 450 MHz support for cost effective deployments in sparsely populated rural areas.
  • Advanced Carrier Ethernet over Wireless for high capacity IP backhaul with scalable link speeds up to 1.5 Gbit/s and intelligent Layer 2 Ethernet features.
  • Best-of-class core-network elements that enable deployment of large scalable networks of mobile, nomadic and fixed WiMAX services, including the ASN (Access Service Network) and CSN (Connectivity Service Network).
  • Our Wireless Services Gateway (WSG) is an innovative solution that combines field-proven IP routing with ASN functions for a radio-agnostic converged IP core network solution for any GSM, GPRS, HSPA, WiMAX and LTE network.
  • Integrated Network Management system (INMS), based on our NetBoss™ service assurance platform.
  • Complete portfolio of turnkey services, including business consulting, network design, site and project engineering, field deployment, network integration, provisioning and optimization, maintenance and managed network services.

ASN-GW configuration

All BS nodes under the same ASN-GW must be in the same IP subnet; that is, every WiMAX interface on those BS nodes must be configured with an IP address from that subnet.
There are two steps for ASN-GW configuration:

Step 1: Configure IP routing parameters
Step 2: Configure GRE tunnel information

IP Routing configuration
1. Right-click on the Router and click edit
2. Click on the IP routing parameters drop down menu
3. In the IP Routing Parameters table, click on Interface information and configure IP address and subnet mask under “Address” (192.0.1.2) and “Subnet Mask” (255.255.255.0)

GRE Tunnel Configuration
1. Repeat steps 1 and 2 above.
2. Click on Tunnel interface instead of Interface information
3. Under “Name”, give a unique name for your tunnel e.g. Tunnel A.
4. Under “Address”, configure an IP address different from the one configured in IP Routing, for example 192.1.1.2. Also choose the subnet mask from the drop-down menu.
5. Under “Tunnel Information” click edit and a new window opens.
6. Under “Tunnel source”, type the address that was configured in the IP routing interface i.e. 192.0.1.2
7. Under “Tunnel destination”, type the address that will match the IP address configured in BS IP interface.
8. Under “Tunnel Mode”, choose GRE.

BS configuration
1. Repeat the same steps as above for IP routing and GRE tunnel configuration and make sure the addresses are in the same subnet. For example, the ASN-GW IP addresses are 192.0.1.2 (IP interface) and 192.1.1.2 (Tunnel interface). For the BS, the addresses would be 192.0.1.3 (IP interface) and 192.1.1.3 (Tunnel Interface).
2. Under “Tunnel Source”, type the address that was configured in the IP routing interface i.e. 192.0.1.3
3. Under “Tunnel destination”, type the address that will match the IP address configured in ASN-GW IP interface.
4. Note: Click on “BS Parameters” menu and type the “ASN Gateway IP address” which in this case will be 192.0.1.2.

You do not need to configure Route Maps. To check that your GRE tunnels are correctly configured, click on Network Visualiser.
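The same-subnet requirement in the steps above can be sanity-checked programmatically before you touch the configuration tool. This is a small sketch using Python's standard ipaddress module with the example addresses from the walkthrough (the variable names are just illustrative):

```python
import ipaddress

# Example addresses from the walkthrough above (/24 = 255.255.255.0).
asn_gw_if  = ipaddress.ip_interface("192.0.1.2/24")   # ASN-GW IP interface
bs_if      = ipaddress.ip_interface("192.0.1.3/24")   # BS IP interface
asn_gw_tun = ipaddress.ip_interface("192.1.1.2/24")   # ASN-GW tunnel interface
bs_tun     = ipaddress.ip_interface("192.1.1.3/24")   # BS tunnel interface

def same_subnet(a: ipaddress.IPv4Interface, b: ipaddress.IPv4Interface) -> bool:
    """True if both interfaces fall in the same IP subnet."""
    return a.network == b.network

# The IP routing interfaces must share a subnet, and so must the
# GRE tunnel interfaces; the two subnets themselves are distinct.
assert same_subnet(asn_gw_if, bs_if)
assert same_subnet(asn_gw_tun, bs_tun)
assert not same_subnet(asn_gw_if, asn_gw_tun)
print("Subnet plan is consistent")
```

Running a check like this on a full address plan catches the most common misconfiguration (a BS interface placed outside the ASN-GW's subnet) before the GRE tunnels silently fail to come up.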

Friday, September 18, 2009

XConnect partners with GSMA on IP routing options

XConnect, a specialist in neutral Voice over IP (VoIP) and Next Generation Network (NGN) interconnection and registry services, today announced it has entered into an agreement with the GSM Association (GSMA) to enable interoperability between XConnect's ENUM Registry Services and the GSMA's PathFinder number translation service operated by Neustar.

Claimed to be an industry first, the agreement allows XConnect customers to query PathFinder for call routing information and also to allow their numbers to be published to the GSMA Carrier ENUM community. XConnect's new SuperQuery feature is said to be completely transparent and seamless for end users.

"Our agreement with the GSMA regarding its PathFinder service initiative reflects our strategy and the widely anticipated trend toward registry interoperability - and through that, universal routing of IP-based services," said Eli Katz, CEO and founder of XConnect. "With one query to the XConnect registry, service providers now have the opportunity to launch and provision rich voice and data services between fixed and mobile networks."

"The GSMA is delighted to be working with XConnect to provide customers with the ability to enable PathFinder queries via XConnect's Carrier ENUM registry," said Adrian Dodd, chief engineer at the GSMA. "This is another step forward in unlocking the revenue potential of IP-based networks by enabling them to deliver new rich and convergent services to businesses and consumers quickly and efficiently, regardless of their network or device type."

By enabling PathFinder queries via XConnect, service providers will simplify the querying, routing and interoperability process and eliminate the need for separate, complex commercial and technical agreements.

XConnect's SuperQuery feature is available as an option to XConnect customers through existing federation-based interconnection services - Global Alliance and National Federations, including the market-leading federations in the Netherlands and South Korea.

XConnect enables service providers to simplify the interconnect process and enable the deployment of revenue-generating, IP-based multimedia services across networks by more efficiently routing calls to their subscribers via ENUM registry queries. This allows service providers to leverage their number assets as they migrate away from expensive legacy SS7/C7 routing. The agreement with the GSMA extends the number of IP endpoints that can be reached by customers of both partners.

Launched in November 2008 by the GSMA and operated by Neustar, the GSMA's PathFinder service initiative is based on Carrier ENUM technology and acts as a central directory where operators can share mobile and fixed-line addresses to enable accurate and cost-effective routing of packet voice, instant messaging, multimedia services, email and video. The PathFinder service is available to mobile and fixed service providers as well as the full value chain including carriers, IPXs, hubs, ISPs, content providers and application providers. In addition, the GSMA's PathFinder initiative also encompasses an Industry Partner Programme, which is designed to ensure that next-generation infrastructure vendors around the globe have an industry vehicle with which to verify interoperability with ENUM-based routing.
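For readers unfamiliar with Carrier ENUM: at its core the technology maps an E.164 telephone number to a DNS domain, which is then queried for NAPTR routing records. A minimal sketch of that mapping, following RFC 6116 (the sample number is made up):

```python
def e164_to_enum_domain(number: str, suffix: str = "e164.arpa") -> str:
    """Map an E.164 number to its ENUM DNS domain (RFC 6116):
    keep only the digits, reverse them, dot-separate, append the suffix."""
    digits = [c for c in number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + suffix

print(e164_to_enum_domain("+1-555-123-4567"))
# -> 7.6.5.4.3.2.1.5.5.5.1.e164.arpa
```

A registry such as PathFinder or XConnect's answers NAPTR queries against domains of this form, returning URIs (SIP addresses, for instance) that tell the querying carrier where to route the call; carrier deployments use private suffixes rather than the public e164.arpa tree.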

Wednesday, September 16, 2009

Communications software solutions provider

Trillium Digital Systems is the leading provider of communications software solutions for the converged network infrastructure. Trillium's source code solutions are used in more than 500 projects by industry-leading suppliers of wireless, Internet, broadband and telephony products.

Trillium's high-performance, high-availability software and services reduce the time, risk and cost of implementing SS7, IP, H.323, MGCP, ATM, Wireless and other standards-based communications protocols.
Trillium actively participates in the development of 3rd generation systems by developing standards-based wireless communications protocols. It is likely that the first 3G mobile terminals will be multi-mode devices, which means that they will support a number of 2nd generation protocol standards in order to reach wide network coverage and to provide 3rd generation advanced services. Trillium has extensive know-how in all the major communications protocol standards in the world and can provide solutions for many types of networks.

Trillium designs all its portable software products using the Trillium Advanced Portability Architecture (TAPA™), a set of architectural and coding standards that ensure the software is completely independent of the compiler, processor, operating system and architecture of the target system. This makes Trillium products portable, consistent, reliable, high-quality, high-performance, flexible, and scalable. This architecture also ensures that all Trillium protocols can interwork seamlessly within the same network or between different networks.
As mentioned above, successful implementation, adoption, and overall acceptance of the 3G wireless networks depends largely on the ability of these new mobile networks to interface and interwork with the existing 2G and legacy networks currently deployed worldwide. Trillium offers a broad range of protocols for first- and second-generation mobile networks, legacy networks, and fixed networks. Trillium's products allow wireless communications equipment manufacturers to develop "best-in-class" next-generation mobile networks, to ensure success of the network operator and service provider, and to ensure wide acceptance of the new services by end-users.
Additional information is available at http://www.trillium.com.

Monday, August 24, 2009

Next Generation Network

A Next Generation Network (NGN) is a packet-based network able to provide services including Telecommunication Services and able to make use of multiple broadband, QoS-enabled transport technologies and in which service-related functions are independent from underlying transport-related technologies. It offers unrestricted access by users to different service providers. It supports generalized mobility which will allow consistent and ubiquitous provision of services to users.

The NGN is characterized by the following fundamental aspects:
  • Packet-based transfer
  • Separation of control functions among bearer capabilities, call/session, and application/ service
  • Decoupling of service provision from network, and provision of open interfaces
  • Support for a wide range of services, applications and mechanisms based on service building blocks (including real time/ streaming/ non-real time services and multi-media)
  • Broadband capabilities with end-to-end QoS and transparency
  • Interworking with legacy networks via open interfaces
  • Generalized mobility
  • Unrestricted access by users to different service providers
  • A variety of identification schemes which can be resolved to IP addresses for the purposes of routing in IP networks
  • Unified service characteristics for the same service as perceived by the user
  • Converged services between Fixed/Mobile
  • Independence of service-related functions from underlying transport technologies
  • Compliant with all Regulatory requirements, for example concerning emergency communications and security/privacy, etc.

Saturday, August 22, 2009

A Functional Model for Data Management

ABSTRACT:
This white paper discusses how data management exists to support business objectives. This means business drivers are used to form the data management strategy and tightly link it to corporate goals, be they profit, revenue, customer satisfaction or another goal. Data management is about managing information assets across the entire enterprise. It involves fostering, creating, and maintaining practices that allow the business to optimize data usage regardless of where the data resides and what functional entity needs it.

With this in mind, this white paper will present a data management functional model that describes the data capabilities that best practice collaboration between business and IT should deliver. The data management functional model aids in the planning and delivery of services to any business or development team that requires specialized data knowledge.

Read this white paper to learn more about the data management functional model and the benefits it can provide for your enterprise today.

Tuesday, August 11, 2009

Nokia 5130 XpressMusic

Nokia 5130 XpressMusic was designed for music lovers with an advanced music player, integrated camera and a host of music sharing options.

Key Features:
Display: 2.0″ QVGA (320×240 pixels) LCD, 256k colors
Video: VGA recording and playback
Music: MP3, AAC, eAAC, eAAC+, MIDI, WMA, AMR, MXMF
Imaging: 2 mega pixel camera with 4x digital zoom
Memory: 30MB Internal plus 1GB microSD included
Connectivity: MicroUSB, Bluetooth 2.0
Operating System: Nokia Series 40

Sunday, August 9, 2009

DDOS attackers continue hitting Twitter, Facebook, Google

The distributed denial-of-service (DDOS) attacks that knocked out Twitter for hours and affected other sites like Facebook, Google's Blogger and LiveJournal on Thursday continued all day Friday and may persist throughout the weekend.

In its latest update, posted to a discussion forum for its third-party developers at 11 p.m. U.S. Eastern Time on Friday, Twitter said it was still fighting the attacks.

"The DDoS attack is still ongoing, and the intensity has not decreased at all," wrote Chad Etzel, from Twitter's application development platform support team.

This means that Twitter will maintain a set of defensive measures that have allowed it to keep the site up but that also have affected the interaction of third-party applications with the site via its API (application programming interface). "At this point, removing any of those defenses is not an option," Etzel wrote.

The real bad news for developers of affected Twitter applications and for their users? Twitter has no idea when it will be able to switch its application platform back to normal. "There is no ETA on fixing any of this," Etzel wrote, adding that Twitter staff plans to work around the clock this weekend to deal with the DDOS attack.

"Things will continue to be rocky as long as this attack continues. They may get worse, they may get better. That should not be read as 'we don't care about fixing it' or 'we're not going to fix it until everything blows over' but instead as 'we can't promise when things will be back to normal, but in the meantime we are working on fixing ASAP,'" Etzel wrote.

As was the case on Thursday, Twitter wasn't the sole target of the DDOS attacks on Friday. Google's Blogger blog publishing service felt the sting of the attacks on Friday afternoon as well. "A small percentage of Blogger users have experienced error messages this afternoon as the result of what appears to be an ongoing distributed denial of service attack aimed at multiple services across the web," a Google spokesman said via e-mail.

"Google has a variety of systems in place to help counteract these types of attacks, and we believe the majority of affected users can now access their blogs. We're continuing to work to minimize the impact to affected Blogger users. No other Google products have been affected," the spokesman said on Friday afternoon.

Facebook, whose site experienced some performance problems on Thursday due to the attacks, acknowledged on Friday afternoon that the attacks had continued. "The requests from the botnet continue but we have been able to isolate them and provide normal levels of service to our legitimate users," a Facebook spokesman said via e-mail on Friday afternoon.

According to news reports and information from companies affected, the attacks appear directed at silencing a blogger in the country of Georgia who has been critical of Russia's actions and policies toward that neighboring country.

Saturday, August 8, 2009

Booting from USB drive even if it is not supported by BIOS

These days the use of USB disks is increasing rapidly; nowadays even installing Windows or booting a system can be done from a USB disk. But this only works on motherboards whose BIOS supports booting from USB. So what do you do with a non-supporting motherboard? If you are stuck with a motherboard that does not support booting from a USB drive, the free boot manager PLoP may be the best solution for you. This simple boot manager can work alongside your existing BIOS bootloader, since it can be launched from any device, i.e. a floppy or a CD drive. Once launched, you can use the menus provided to boot your PC from almost any device: USB, CD, or even the network.

You can download it from the link here in a zip format and follow the instructions included in the archive. Hope this will solve your problem.

VPN (virtual private network)

A VPN (virtual private network) allows a host (your computer) to communicate over an untrusted network (the Internet) in a secure environment (the VPN). Consider a tunnel that runs through a mountain. The tunnel is pretty safe, but anyone can use it. However, we want a private road that no one else can use. So, we build another tunnel inside the existing tunnel, taking up one of the lanes on the existing tunnel highway (a tunnel inside of a tunnel). The extra tunnel can be likened to a VPN.
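The tunnel-inside-a-tunnel idea boils down to encapsulation: the inner packet travels as the payload of an outer one, and is unwrapped at the far end. A deliberately toy sketch of that idea (no encryption or authentication, which real VPN protocols layer on top):

```python
# Toy illustration of tunneling: wrap an "inner packet" in an outer
# header, carry it across the untrusted network, then unwrap it.
# Header format and contents here are invented for illustration only.

def encapsulate(inner: bytes, outer_header: bytes) -> bytes:
    """Prepend the tunnel's outer header to the inner packet."""
    return outer_header + inner

def decapsulate(packet: bytes, header_len: int) -> bytes:
    """Strip the outer header to recover the inner packet."""
    return packet[header_len:]

header = b"TUNNEL-HDR|"
inner = b"private payload for the corporate network"

on_the_wire = encapsulate(inner, header)
recovered = decapsulate(on_the_wire, len(header))
assert recovered == inner
print("Inner packet recovered intact")
```

A real VPN also encrypts and authenticates the inner packet before wrapping it, which is what makes the traffic visible-but-unreadable to anyone watching the outer tunnel.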

Of course, VPNs are built out of math and electricity, not cement and roads. For example, Microsoft provides a free VPN client for all of its Windows operating systems. Your network admin could install it on your computer and enable VPN capability on the network she manages, so that when you connect to the network remotely, you must do so through the VPN client.

Cisco and other vendors sell VPN clients. Cisco’s is not free; they charge over $5,000 for each VPN client you want to install! Yikes. Many people pay the fee, though, because Cisco’s product offers robust security.

You use the VPN client your network admin installed on your system by first clicking its icon to start it. Then you get on the Internet and connect to your company’s IP address. Next, log in to the network while you are safely tucked inside your VPN connection. No one on the Internet can tamper with your traffic when you’re working inside a VPN. A hacker might see your traffic, but he can’t understand it.
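That "seen but not understood" idea can be sketched with a toy cipher. The snippet below is purely illustrative and assumes a key shared out of band; it is NOT how real VPN protocols like IPsec work, and should never be used to protect actual traffic:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from a shared key (toy only, NOT secure)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def tunnel_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the payload with the keystream before it crosses the untrusted network."""
    ks = keystream(key, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

# Decryption is the same XOR with the same shared key.
tunnel_decrypt = tunnel_encrypt

shared_key = b"key agreed out of band"     # hypothetical pre-shared key
message = b"payroll report, confidential"
ciphertext = tunnel_encrypt(shared_key, message)

assert ciphertext != message                              # an eavesdropper sees gibberish
assert tunnel_decrypt(shared_key, ciphertext) == message  # the endpoint recovers it
```

The hacker on the wire sees `ciphertext`, which is meaningless without the shared key; only the two tunnel endpoints can recover `message`.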

Various popular standards for compressing multimedia data

JPEG
JPEG stands for Joint Photographic Experts Group. However, what people usually mean when they use the term "JPEG" is the image compression standard they developed. JPEG was developed to compress still images, such as photographs, a single video frame, something scanned into the computer, and so forth. You can run JPEG at any speed that the application requires. For a still picture database, the algorithm doesn't have to be very fast. If you run JPEG fast enough, you can compress motion video -- which means that JPEG would have to run at 50 or 60 fields per second. This is called motion JPEG or M-JPEG. You might want to do this if you were designing a video editing system. Now, M-JPEG running at 60 fields per second is not as efficient as MPEG 2 running at 60 fields per second because MPEG was designed to take advantage of certain aspects of motion video.
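The efficiency gap can be made concrete with some rough arithmetic. The frame geometry, sample depth, and compression ratios below are illustrative assumptions (roughly 10:1 for M-JPEG and 30:1 for MPEG 2), not figures from any specification:

```python
# 720 x 480 interlaced video: 60 fields/s, each field 720 x 240 pixels,
# 2 bytes per pixel (4:2:2 sampling)
bytes_per_field = 720 * 240 * 2
raw_bps = bytes_per_field * 60 * 8           # raw rate in bits per second

mjpeg_bps = raw_bps / 10                     # assumed ~10:1 intra-frame compression
mpeg2_bps = raw_bps / 30                     # assumed ~30:1 with inter-frame prediction

print(f"raw:    {raw_bps / 1e6:.1f} Mbps")   # 165.9 Mbps
print(f"M-JPEG: {mjpeg_bps / 1e6:.1f} Mbps") # 16.6 Mbps
print(f"MPEG 2: {mpeg2_bps / 1e6:.1f} Mbps") # 5.5 Mbps
```

Even with generous assumptions, compressing each field independently leaves M-JPEG around three times the bitrate of an encoder that also exploits frame-to-frame redundancy.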

Motion JPEG

JPEG compression or decompression applied to video in real time. Each field or frame of video is processed individually.

MPEG

MPEG stands for Moving Picture Experts Group. This is a body of the ISO/IEC (International Organization for Standardization / International Electrotechnical Commission) that develops various compression algorithms. MPEG differs from JPEG in that MPEG takes advantage of the frame-to-frame redundancy of a motion video sequence, whereas JPEG does not.
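That frame-to-frame redundancy is easy to illustrate. The toy example below (invented pixel values, not real MPEG syntax) stores a second frame as only the pixels that changed since the first:

```python
# Two consecutive "frames" as lists of pixel values
frame1 = [10, 10, 10, 200, 200, 10, 10, 10]
frame2 = [10, 10, 10, 10, 200, 200, 10, 10]  # the bright object moved one pixel

# JPEG-style: store every pixel of every frame
intra_cost = len(frame1) + len(frame2)

# MPEG-style: store frame1 fully, then only (index, value) for pixels that changed
diff = [(i, b) for i, (a, b) in enumerate(zip(frame1, frame2)) if a != b]
inter_cost = len(frame1) + len(diff)

# The decoder rebuilds frame2 by patching frame1 with the diff
decoded = list(frame1)
for i, value in diff:
    decoded[i] = value
assert decoded == frame2
assert inter_cost < intra_cost  # 10 stored values instead of 16
```

Real MPEG encoders go further (motion-compensated prediction on blocks, not raw pixel diffs), but the principle is the same: most of a frame is already present in its neighbors.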

MPEG 1

MPEG 1 was the first MPEG standard defining the compression format for real-time audio and video. The video resolution is typically 352 x 240 or 352 x 288, although higher resolutions are supported. The maximum bitrate is about 1.5 Mbps. MPEG 1 is used for the Video CD format.

MPEG 2

MPEG 2 extends the MPEG 1 standard to cover a wider range of applications. Higher video resolutions are supported to allow for HDTV applications, and both progressive and interlaced video are supported. MPEG 2 is used for the DVD-Video and SVCD formats, and also forms the basis for digital SDTV and HDTV.

MPEG 3

MPEG 3 was originally targeted at HDTV applications. That work was incorporated into MPEG 2, so there is no MPEG 3 standard.

MPEG 4

MPEG 4 uses an object-based approach, where scenes are modeled as compositions of objects, both natural and synthetic, with which the user may interact. Visual objects in a scene are described mathematically and given a position in a two- or three-dimensional space. Similarly, audio objects are placed in a sound space. Thus, the video or audio object need only be defined once; the viewer can change his viewing position, and the calculations to update the audio and video are done locally. Classical "rectangular" video, as from a camera, is one of the visual objects defined in the standard. In addition, there is the ability to map images onto computer-generated shapes, and a text-to-speech interface.

MPEG 7
MPEG 7 standardizes the description of multimedia material (referred to as metadata), such as still pictures, audio, and video, regardless of whether it is stored locally, held in a remote database, or broadcast. Examples include finding a scene in a movie, finding a song in a database, or selecting a broadcast channel. Someone searching for an image can use a sketch or a general description, and music can be found using a "query by humming" interface.

Thursday, August 6, 2009

Welcome Back Cox Communications!

Customers of Cox Communications have been unable to browse The Tech FAQ for the last several weeks due to a technical misunderstanding inside Cox’s Internet abuse department.

Cox claimed that an attack on their webmail servers originated from our .109 IP address. This is quite an amusing claim, as no traffic is allowed to originate from our .109 IP address. That IP address is reserved only for inbound traffic.

Cox never contacted us, and when we were finally able to reach an engineer at Cox we were told that they had already deleted all logs of the supposed incident.

But after getting past armies of first-level “support” personnel, the engineer I finally spoke with was very nice. He removed the block on our network, and Cox customers are once again able to browse our extensive library of technical information.

Welcome back Cox Communications!

EASEUS Partition Manager

Partitioning a hard drive is not a task you perform on a daily basis; it is usually done when you acquire a new drive. Some consider it a risky and intricate process, but EASEUS Partition Manager will change your outlook.

EASEUS Partition Manager (EASEUS PM) is a disk partition management tool and an alternative to Partition Magic (currently known as Norton Partition Magic). The software is available in four editions: Home, Professional, Server, and Unlimited. It is a handy tool designed to help you slice up a hard drive to your requirements and taste.

Features and Functions

  • Offers a simple and speedy way to configure and administer hard disk partitions, with comprehensive control over creating, deleting, resizing, moving, and formatting partitions
  • Supports browsing detailed information about all hard drives, partitions, and file systems
  • Lets you assign a label to each partition, and supports hiding or un-hiding partitions
  • Can resize or move live partitions using available free space, without any loss of data
  • Can shrink an existing partition and merge the freed space into another partition to construct a larger one
  • Gives detailed, step-by-step instructions to guide the user through each operation
  • Compatible with Windows NT/2000/XP/2003/Vista and handles disks from 2 GB to 1 TB
  • Can create a bootable CD, which lets you manage partitions before Windows loads (required for certain changes)
  • Runs a bad-sector test and can invoke Windows’ CHKDSK utility to repair defects
  • Protects your settings from unauthorized changes with password locking

The Disadvantages

  • The interface and visuals are not very appealing, from the colors down to the fonts; aesthetically, it matches the aged look of older operating systems
  • Lacks any kind of backup option for cloning existing data

The Conclusion

The user interface may not win any awards, but the wide-ranging, powerful functionality and ease of use more than make up for it. So grab a copy of EASEUS Partition Manager to make things simple and reclaim some peace of mind.

Visit EASEUS Partition Manager’s website

New Cell Phones

These are the latest and greatest mobile handsets from manufacturers like Nokia, Motorola, and Palm.

Check out these great new cell phones with features like Bluetooth, EVDO, GPS, microSD card slots, and instant messaging.

Seven Things IT Professionals Must Know

Our friends at TradePub have teamed with eEye Digital Security to bring you a free eBook titled Seven Things IT Professionals Must Know.

This free eBook is designed to help you gain key insights into IT security problems and find the safest means to protect your technological assets.

Seven Things IT Professionals Must Know details the seven pain points often encountered by IT security professionals and gives practical advice on how to solve them. This eBook is useful for any IT Professional dealing with internal and external attacks.

The eBook, unsurprisingly, covers eEye products and how they can be useful in protecting enterprise resources, network assets, web sites, and applications. However, the information provided can be useful no matter what IT security tools you choose to use.

The book covers core IT security topics such as:

  • How to prevent the loss of protected information
  • How to resolve network weaknesses
  • How to resist system exploitation through vulnerable network ports
  • How to protect against harmful spyware attacks
  • How to defend against unwanted intruders

It’s a good read and you can’t beat the price. :)