Tuesday, November 9, 2010

Lazy males made to work


The best known insect societies are those of ants, bees and wasps, all of which belong to the order Hymenoptera. Individuals in these species organize themselves into colonies consisting of tens to millions of individuals. Each colony is headed by one or a small number of fertile queens, while the rest of the individuals serve as sterile or nearly sterile workers. The spectacular ecological success of the social insects, their caste differentiation, division of labour and highly developed communication systems are well known. A less studied but equally intriguing aspect of these hymenopteran societies is that they are feminine monarchies – there are queens but no kings, and all workers are females. Males do little more than transfer their sperm to virgin queens, while all the work involved in nest building, brood care, and finding and processing food is done by the females.

Why don’t males work, at least during the period that they stay on the nests of their birth? Using the Indian primitively eusocial wasp Ropalidia marginata and the important task of feeding larvae as an example of work, we have recently made a novel attempt to understand the secret behind the well-known laziness of the males. We considered three hypotheses:
1. males are incapable of feeding larvae,
2. males never get access to enough food to satisfy themselves and still have something left over to offer to the larvae (males do not forage on their own and depend on the females for access to food), and
3. females are so much more efficient at feeding larvae that they leave no opportunities for the relatively inefficient males to do so.

To test these hypotheses, my graduate student Ms. Ruchira Sen offered experimental colonies excess food. This resulted in a marginal amount of feeding of the larvae by males, thus disproving the hypothesis that males are incapable of feeding larvae. Then she removed all the females from some colonies and left the males alone with hungry larvae. This experiment was a non-starter because males cannot forage and find food in the absence of females. Ruchira overcame this problem by mastering the art of tenderly and patiently hand-feeding the males, and she gave them more food than they could consume themselves so that they might feed larvae if they could. Her efforts were rewarded when males under these conditions fed larvae at rates nearly comparable to those of the females. Thus males can feed larvae and will do so if they are given an opportunity. It therefore appears that males do not feed larvae under natural circumstances because they do not have access to enough food and/or because females leave them few opportunities to do so. There are several lines of evidence to suggest that the males were not merely dumping unwanted food but that they were actively seeking out the most appropriate larvae and feeding them “deliberately”. But it must be emphasized that, from the point of view of the larvae, males were quite inefficient compared to the females. Apart from the fact that males fed only the oldest larvae and ignored all the young larvae, it turned out that many of the larvae under all-male care died.

In addition to their obvious interest, these studies open up a major evolutionary puzzle: why has natural selection not made the males more efficient and made feeding larvae by males a routine matter? Answering one question raises at least one more – and that’s how it should be.

Hard rocks can have long memories


One of the best ways to understand the geological history of our 4500-million-year-old planet is to study rocks formed under a wide variety of geological conditions. Geologists, equipped with their vast experience and advanced analytical instruments, can identify and interrogate those rocks that best preserve evidence of past geological events. One such instrument is the sensitive high resolution ion microprobe (SHRIMP), a large specialized mass spectrometer that measures the ages of rocks, their precursors and major thermal events by firing a 10,000-volt ion beam at crystals as small as 0.05 mm in diameter and measuring the isotopic abundances of the lead, uranium and thorium that are released.
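To get a feel for the arithmetic behind uranium-lead dating (a simplified illustration only, not the SHRIMP data reduction itself), the age follows directly from the radioactive decay law. The decay constant below is the standard value for 238U; the measured ratio is invented for the example.

import math

# Decay constant of 238U, in 1/year (standard published value).
LAMBDA_238 = 1.55125e-10

def u_pb_age_years(pb206_u238):
    # Decay law: (radiogenic 206Pb / 238U) = exp(lambda * t) - 1, solved for t.
    return math.log(1.0 + pb206_u238) / LAMBDA_238

# Hypothetical measured ratio, roughly what a ~570-million-year-old zircon gives.
ratio = 0.0925
print("Apparent 206Pb/238U age: %.0f million years" % (u_pb_age_years(ratio) / 1e6))

Real analyses also use the 207Pb/235U system and correct for non-radiogenic lead, but the decay-law step is the same.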

The reconstruction of the continents that existed in the past is an important part of understanding the dynamic evolution of earth. The ancient supercontinent of Gondwana once consisted of what are now the smaller continents of South America, Africa, Madagascar, southern India, Sri Lanka, Antarctica and Australia. Determining the timing of the geological events involving rock formation and modification (deformation, metamorphism etc.) in these continental fragments is vital in piecing together the evolution of the earth's crust during any period of geological time. Most rocks 'forget' their history if exposed to extreme geological conditions, but there are some rare cases where particular rocks derived from the earth's lower crust have preserved, in their distinctive mineralogy, convincing evidence of the very high temperatures that can be present at depth.
The rocks of the central Highland Complex in Sri Lanka, and some parts of Antarctica and southern India, have been subjected to some of the highest peak temperatures of crustal metamorphism known, over 1100°C. At such temperatures most rocks would turn into molten magma, but in the November issue of Geology, Sajeev and others report rocks from near Kandy (Sri Lanka) that not only survived the high temperatures but contain crystals of zircon in which a uranium-lead isotopic record of their provenance and thermal history has survived. Such survival is contrary to all predictions from experimental studies of the rate at which lead should be lost from zircon by thermal diffusion.

From a study of the metamorphic minerals and thermodynamic modelling, and SHRIMP uranium-lead isotopic analyses of zircon and monazite (cerium phosphate), the authors have shown that the rocks near Kandy were originally sediments derived from sources ranging in age from 2500 to 830 million years. The sediments were heated to over 1100°C at a depth of about 25 km about 570 million years ago, and then rapidly lifted towards the surface, while still hot, about 550 million years ago. These Sri Lankan rocks were probably trapped and buried in the violent collision between the two halves of the Gondwana supercontinent about 600 million years ago, superheated by basalt magmas rising from the earth's interior, then forced to the near surface again as the tectonic pressures relaxed. The preservation of the isotopic record of these events is remarkable, and still remains to be fully explained.

Wednesday, October 6, 2010

The 15 Most Promising Inventions of 2010

15. nPower Personal Energy Generator

14. Flying Car: Terrafugia

13. Sony 3D-360 Hologram

12. Xeros Waterless Washing Machine

11. Recompute: The Cardboard Computer

10. Powermat Wireless Battery Charger

09. Samsung Water-Powered Battery

08. 2010 Brabus Mercedes-Benz Viano Lounge

07. V12 Dual-Touchscreen Notebook

06. MyKey by Ford

05. Tri-Specs

04. Google Wave

03. The KS810 Keyboard Scan

02. Apple Tablet

01. Software that Captures Sports Games Robotically

Tuesday, September 7, 2010

Human Teleportation-2




Ever since the wheel was invented more than 5,000 years ago, people have been inventing new ways to travel faster from one point to another. The chariot, bicycle, automobile, airplane and rocket have all been invented to decrease the amount of time we spend getting to our desired destinations. Yet all of these forms of transportation share the same flaw: they require us to cross a physical distance, which can take anywhere from minutes to many hours depending on the starting and ending points.

But what if there were a way to get you from your home to the supermarket without having to use your car, or from your backyard to the International Space Station without having to board a spacecraft? There are scientists working right now on such a method of travel, combining properties of telecommunications and transportation to achieve a system called teleportation. In this article, you will learn about experiments that have actually achieved teleportation with photons, and how we might be able to use teleportation to travel anywhere, at any time.

Teleportation involves dematerializing an object at one point, and sending the details of that object's precise atomic configuration to another location, where it will be reconstructed. What this means is that time and space could be eliminated from travel -- we could be transported to any location instantly, without actually crossing a physical distance.

Many of us were introduced to the idea of teleportation, and other futuristic technologies, by the short-lived Star Trek television series (1966-69) based on tales written by Gene Roddenberry. Viewers watched in amazement as Captain Kirk, Spock, Dr. McCoy and others beamed down to the planets they encountered on their journeys through the universe.

In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility. It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. This revelation, first announced by Bennett at an annual meeting of the American Physical Society in March 1993, was followed by a report on his findings in the March 29, 1993 issue of Physical Review Letters. Since that time, experiments using photons have proven that quantum teleportation is in fact possible.

Human Teleportation-1




We are years away from the development of a teleportation machine like the transporter room on Star Trek's Enterprise spaceship. The laws of physics may even make it impossible to create a transporter that enables a person to be sent instantaneously to another location, which would require travel at the speed of light.

For a person to be transported, a machine would have to be built that can pinpoint and analyze all of the 10^28 atoms that make up the human body. That's more than a trillion trillion atoms. This machine would then have to send this information to another location, where the person's body would be reconstructed with exact precision. Molecules couldn't be even a millimeter out of place, lest the person arrive with some severe neurological or physiological defect.


In the Star Trek episodes, and the spin-off series that followed it, teleportation was performed by a machine called a transporter. This was basically a platform that the characters stood on, while Scotty adjusted switches on the transporter room control boards. The transporter machine then locked onto each atom of each person on the platform, and used a transporter carrier wave to transmit those molecules to wherever the crew wanted to go. Viewers watching at home witnessed Captain Kirk and his crew dissolving into a shiny glitter before disappearing, rematerializing instantly on some distant planet.

If such a machine were possible, it's unlikely that the person being transported would actually be "transported." It would work more like a fax machine -- a duplicate of the person would be made at the receiving end, but with much greater precision than a fax machine. But what would happen to the original? One theory suggests that teleportation would combine genetic cloning with digitization.

In this biodigital cloning, tele-travelers would have to die, in a sense. Their original mind and body would no longer exist. Instead, their atomic structure would be recreated in another location, and digitization would recreate the travelers' memories, emotions, hopes and dreams. So the travelers would still exist, but they would do so in a new body, of the same atomic structure as the original body, programmed with the same information.

But like all technologies, scientists are sure to continue to improve upon the ideas of teleportation, to the point that we may one day be able to avoid such harsh methods. One day, one of your descendants could finish up a work day at a space office above some faraway planet in a galaxy many light years from Earth, tell his or her wristwatch that it's time to beam home for dinner on planet X below, and sit down at the dinner table as soon as the words leave his or her mouth.

Thursday, August 26, 2010

Thermal Design Power (TDP)




To understand power management, it's important to fully appreciate the ways designers deal with average and peak power. Most of this article will focus on how average power can be reduced, but there are also some interesting power management techniques to handle the case of peak power consumption. TDP is a measure of how much power needs to be dissipated by the cooling solution when the CPU is running the maximum software workload that would be expected in normal operating conditions. (With specialized test code, a CPU could generate even more heat.)
More specifically, the CPU manufacturers calculate TDP as the amount of heat that needs to be transferred from the processor die in order to keep the transistor junction temperature (Tj) below the maximum at which the device is guaranteed to operate. (Tj is usually 100 degrees centigrade or lower, but things are actually more complicated: some vendors specify a die "case" temperature as low as 70 degrees centigrade in order to reach high clock rates. That's why some desktop heat sinks are enormous.)
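As a rough sketch of why TDP drives the size of the cooling solution, here is the usual thermal-resistance arithmetic in a few lines of Python. All of the numbers are invented for illustration and do not come from any vendor's datasheet.

def junction_temp_c(ambient_c, power_w, theta_jc, theta_ca):
    # Simple steady-state model: Tj = Ta + P * (junction-to-case + case-to-ambient resistance)
    return ambient_c + power_w * (theta_jc + theta_ca)

# Invented numbers: 35 C air inside a laptop, a 30 W TDP part, 0.5 C/W from
# junction to case, and two candidate cooling solutions.
for theta_ca in (1.5, 0.8):  # C/W: modest heat sink vs. a larger one
    tj = junction_temp_c(ambient_c=35, power_w=30, theta_jc=0.5, theta_ca=theta_ca)
    print("theta_ca = %.1f C/W  ->  Tj = %.0f C" % (theta_ca, tj))

With the smaller heat sink the junction sits near the usual 100-degree limit, while the larger one keeps it closer to the stricter 70-degree targets mentioned above, which is exactly the trade-off that makes some desktop coolers so big.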

How that heat gets removed is part of the system thermal design and can be accomplished by heat sinks, fans, and air vents. In a mobile device, a large portion of the heat is conductively transferred through the system chassis, and then onto your lap, which highlights one of the limitations of using a CPU that has a high TDP value. Many laptops use CPUs with a TDP of 30 or more watts. These are easily identified by the fans in the case and the short amount of time you'd actually want the machine on your lap. Note that multicore CPUs make this problem even worse, since the TDP and cooling solutions are based on all cores running simultaneously.

CPUs Protect Themselves from Killer Heat

Before the CPU die can exceed the maximum junction temperature, on-chip thermal sensors signal special circuitry to lower the temperature. Over the years, CPUs have incorporated several mechanisms for measuring and controlling temperature. An on-chip thermal diode allows an external analog-to-digital converter to monitor temperature: the diode's forward voltage changes as the chip heats up, allowing the system microcontroller to measure that change and take action to lower the temperature.
The system vendors program the microcontroller with temperature control algorithms to speed up fans, throttle the CPU, etc. In some designs, the CPU will run its own BIOS code to control temperature. However, CPU designers were worried about chip damage if the external microcontroller were to fail. Also, some of the thermal spikes happen so rapidly that it was possible to exceed maximum die temperature before the system could respond. Additional on-chip temperature sensors have been added, directly controlling digital logic that automatically reduces CPU performance and temperature. If for some reason the CPU temperature keeps rising, eventually it reaches a critical condition, and hardware signals the power supply to shut down completely.
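In outline, the control policy is just a set of temperature thresholds. Below is a deliberately simplified sketch of the kind of loop a system microcontroller might run; the threshold values and actions are hypothetical, not any vendor's actual firmware.

# Hypothetical thresholds in degrees C; real trip points are vendor-specific.
FAN_BOOST_TEMP = 80
THROTTLE_TEMP = 95
CRITICAL_TEMP = 110   # hardware-forced shutdown

def thermal_action(die_temp_c):
    # Real firmware adds hysteresis so the fan and throttle do not oscillate.
    if die_temp_c >= CRITICAL_TEMP:
        return "signal power supply to shut down"
    if die_temp_c >= THROTTLE_TEMP:
        return "throttle the CPU"
    if die_temp_c >= FAN_BOOST_TEMP:
        return "increase fan speed"
    return "no action"

for t in (70, 85, 100, 115):
    print(t, "->", thermal_action(t))

The TM1 and TM2 mechanisms described below are what the "throttle" step maps to in hardware.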

Sometimes you'll see references to Thermal Monitor 1 (TM1) and Thermal Monitor 2 (TM2). These are mechanisms used by the CPU to quickly reduce performance and get an accompanying drop in power consumption. TM1 is an older technology and simply inserts idle cycles to effectively halve the pipeline frequency, even though the clock signal continues to run at the same frequency. This is a dramatic drop in performance for a linear drop in power consumption.

TM2 uses dynamic voltage scaling (DVS) techniques to reduce the clock frequency and then signal the external voltage regulator to shift to a lower voltage. The power supply voltage won't drop instantaneously because of capacitance. However, voltage reduction has the biggest impact on temperature, since power varies with the square of the voltage. We'll talk more about dynamic voltage scaling, since it is a key power management technique that helps reduce average power consumption. The various CPU vendors differ in the algorithms they use to throttle clock rate and voltage to keep the die below the maximum temperature.
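The difference between the two mechanisms is easiest to see with the usual dynamic-power approximation, P ≈ C·V²·f. The capacitance, voltage and frequency values in this sketch are made-up round numbers, chosen only to show the scaling.

def dynamic_power_w(c_eff_farads, volts, freq_hz):
    # Switching-power approximation: P = C_eff * V^2 * f
    return c_eff_farads * volts ** 2 * freq_hz

C_EFF = 1e-8   # effective switched capacitance, an illustrative round number

base = dynamic_power_w(C_EFF, 1.4, 2.0e9)   # 1.4 V at 2.0 GHz
tm1  = dynamic_power_w(C_EFF, 1.4, 1.0e9)   # TM1-style: half the effective clock, same voltage
tm2  = dynamic_power_w(C_EFF, 1.1, 1.0e9)   # TM2-style: half the clock AND a lower voltage

print("baseline: %.1f W" % base)
print("TM1: %.1f W (%.0f%% of baseline)" % (tm1, 100 * tm1 / base))
print("TM2: %.1f W (%.0f%% of baseline)" % (tm2, 100 * tm2 / base))

Halving the clock alone roughly halves the dynamic power, but dropping the voltage as well cuts it by about two thirds in this example, which is why TM2 and dynamic voltage scaling matter so much.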

Monday, July 19, 2010

System Cloning Overview



Windows XP Embedded includes the System Cloning Tool component. The system cloning process is used during manufacture to ensure that each device has a run-time image containing a unique computer security ID (SID) and computer name.

If each device undergoes the stand-alone First Boot Agent (FBA) process separately, cloning is not required. However, the stand-alone FBA process is time-consuming and therefore impractical in a typical production environment.

If you simply copied the same post-FBA image to every device, every device would share the same computer SID. This presents a problem because every computer running Windows XP is required to have a unique computer SID. The solution is to include the System Cloning Tool component in your run-time image.

The cloning process consists of the following two phases:

Reseal phase
The reseal phase occurs on the device, which is called the master because the image created on it will be the cloned image. Typically, the reseal phase occurs just before the reboot that precedes the cloning phase; however, additional operations can occur between the reseal phase and the device reboot. After the reseal phase has completed, you must immediately shut off the device before the subsequent reboot would typically occur. At this time, the on-disk image is ready for cloning. For more information, see Reseal Phase.

Cloning phase
The cloning phase automatically begins the first time the image boots after the reseal phase, unless you set the extended property cmiResealPhase to 0 in Target Designer. Typically, this occurs after the on-disk image from the master has been copied to another device, called the clone. The clone device picks up where the master device left off after the reseal phase. During the cloning phase, the computer SID from the master device is replaced with a unique computer SID everywhere the SID appears. This makes each clone unique where it is required but identical to the master everywhere else.
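Conceptually, the clone phase amounts to generating a fresh machine SID and substituting it for the master's SID everywhere it occurs. The toy Python sketch below illustrates only that idea; it is not the System Cloning Tool or any real Windows API.

import random

def new_machine_sid():
    # Toy SID of the usual S-1-5-21-x-y-z shape; not how Windows generates one.
    return "S-1-5-21-" + "-".join(str(random.getrandbits(32)) for _ in range(3))

def reseal(image):
    # Reseal phase: flag the image so the next boot runs the clone phase.
    image["reseal_pending"] = True
    return image

def clone(image):
    # Clone phase: swap the master's SID for a fresh one everywhere it appears
    # (in the real tool: registry entries, file-system ACLs, and so on).
    old_sid, fresh_sid = image["machine_sid"], new_machine_sid()
    image["machine_sid"] = fresh_sid
    image["acl_entries"] = [e.replace(old_sid, fresh_sid) for e in image["acl_entries"]]
    image["reseal_pending"] = False
    return image

master = {"machine_sid": new_machine_sid(), "acl_entries": [], "reseal_pending": False}
master["acl_entries"] = [master["machine_sid"] + "-1001:FullControl"]

reseal(master)                                                      # on the master, just before shutdown
device = {**master, "acl_entries": list(master["acl_entries"])}     # copy the on-disk image to another device
clone(device)                                                       # runs automatically on the clone's first boot
print(device["machine_sid"] != master["machine_sid"])               # True: each clone gets a unique SID

The real tool of course also handles the computer name and every registry and ACL location where the SID is embedded, which is part of why the clone phase can take a while on NTFS, as noted below.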



During the cloning phase, you see a message in the Windows XP boot monitor stating that Windows is starting. This message notifies you that the cloning process is working. The amount of time spent in this phase depends on the size of the image and whether it is a FAT or NTFS file system. An image on an NTFS file system partition will take longer to clone because the NTFS file system uses SIDs to control access to each file system object using access control lists (ACLs).

Thursday, July 8, 2010

'Quantum Computer' a Stage Closer With Silicon Breakthrough


ScienceDaily (June 23, 2010) — The remarkable ability of an electron to exist in two places at once has been controlled in the most common electronic material -- silicon -- for the first time. The research findings -- published in Nature by a UK-Dutch team from the University of Surrey, UCL (University College) London, Heriot-Watt University in Edinburgh, and the FOM Institute for Plasma Physics near Utrecht -- mark a significant step towards the making of an affordable "quantum computer."

According to the research paper, the scientists have created a simple version of Schrodinger's cat -- which is paradoxically simultaneously both dead and alive -- in the cheap and simple material out of which ordinary computer chips are made.
"This is a real breakthrough for modern electronics and has huge potential for the future," explained Professor Ben Murdin, Photonics Group Leader at the University of Surrey. "Lasers have had an ever increasing impact on technology, especially for the transmission of processed information between computers, and this development illustrates their potential power for processing information inside the computer itself. In our case we used a far-infrared, very short, high intensity pulse from the Dutch FELIX laser to put an electron orbiting within silicon into two states at once -- a so-called quantum superposition state. We then demonstrated that the superposition state could be controlled so that the electrons emit a burst of light at a well-defined time after the superposition was created. The burst of light is called a photon echo; and its observation proved we have full control over the quantum state of the atoms."

And the development of a silicon based "quantum computer" may be only just over the horizon. "Quantum computers can solve some problems much more efficiently than conventional computers -- and they will be particularly useful for security because they can quickly crack existing codes and create un-crackable codes," Professor Murdin continued. "The next generation of devices must make use of these superpositions to do quantum computations. Crucially our work shows that some of the quantum engineering already demonstrated by atomic physicists in very sophisticated instruments called cold atom traps, can be implemented in the type of silicon chip used in making the much more common transistor."

Professor Gabriel Aeppli, Director of the London Centre for Nanotechnology added that the findings were highly significant to academia and business alike. "Next to iron and ice, silicon is the most important inorganic crystalline solid because of our tremendous ability to control electrical conduction via chemical and electrical means," he explained. "Our work adds control of quantum superpositions to the silicon toolbox."

Wednesday, July 7, 2010

OverClocking?...




Overclocking is the somewhat unknown and uncommon practice of running your CPU (or other parts) past the speed it is rated for. An example is running a 1.2 GHz CPU at 1.4 GHz or a 200 MHz CPU at 233 MHz. How can this be achieved? The following description isn't exact, but it captures the basic idea. Most CPU companies create their CPUs and then test them at a certain speed. If a CPU fails at a certain speed, it is sold as a CPU at the next lower speed. The tests are usually very stringent, so a CPU may be able to run at the higher speed quite reliably. In fact, the tests are often not used at all. For example, once a company has been producing a certain CPU for a while, they have the process down well enough that all the CPUs they make will run reliably at the highest speed the design is rated for. Thus, just to fill the demand, they will mark some of them as the slower CPUs.
Beware, however, that some vendors may sell CPUs already overclocked. This is why it is very important to buy from a dealer you can trust.
Some video cards are also very overclockable, with some companies selling their cards already overclocked (and advertising them that way). Programs like Powerstrip can often be used to easily overclock the cards.
Also, if you're afraid to overclock your CPU, let another company do it for you! Companies like ComputerNerd sell CPUs pretested at overclocked speeds.

What To Consider:

Do you NEED to overclock? It may not be worth the risk if your computer is running fine as it is. However, if it seems a little too slow and/or you're a speed freak, it may be worth the risk.
How important is your work? If you're running a very important network server, it may not be worth it to put the extra strain on the computer. Likewise, if your computer does a lot of highly CPU-intensive operations, you may also want to avoid overclocking. Obviously the most stable computer is going to be one that is not overclocked. This is not to say that an overclocked computer cannot be 100% stable, because they CAN. If you just use your computer to play games and would like to have slightly faster frame rates, then overclocking may be worth it.
Potential Side-Effects?

The first impression people usually have of overclocking is "isn't that dangerous?" For the most part, the answer is no. If all you do to try to overclock your computer is change the CPU's speed, there is very little chance that you will damage your computer and/or the CPU, as long as you do not push your computer too hard (i.e. trying to run a 500 MHz CPU at 1 GHz). Damage has happened, but it's a rare thing. Also, if you start increasing voltage settings to allow your CPU to run at a higher speed, there is more risk involved.
The best way to prevent damage is to keep your CPU as cool as possible. The only way you can really damage your CPU is if it gets too hot. Adequate cooling is one of the keys to successful overclocking. Using large heatsinks with powerful ball-bearing fans will help to achieve this. How hot is too hot? If you can't keep your finger on the CPU's heatsink comfortably, then it is probably too hot and you should lower the CPU's speed.
Changing the bus speed is actually more beneficial than changing the CPU's speed. The bus speed is basically the speed at which the CPU communicates with the rest of the computer. When you increase the bus speed, in many cases you will be overclocking all the parts in your AGP, PCI slots, and your RAM as well as the CPU. Usually this is by a small margin and won't hurt these components. Pay attention to them though. If they're getting too hot, you may need to add extra cooling for them (an additional fan in your case). Just like your CPU, if they get too hot, they may be damaged as well.
Difficulty Level:

Believe it or not, it's actually quite simple. In many cases all you have to do is change a couple of jumpers on the motherboard or change settings in your motherboard's BIOS.
Recommendations:

Most of today's CPUs are multiplier locked, but you can change the bus speed. As an example, you could run a 1.2 GHz Thunderbird that normally runs at a 133 MHz bus (also called 266 because it is "double-pumped") at:
Multiplier * Bus Speed = CPU speed in MHz
9 * 133 = 1,200 MHz = 1.2 GHz = default
9 * 140 = 1,260 MHz = 1.26 GHz
9 * 145 = 1,305 MHz = 1.3 GHz
9 * 150 = 1,350 MHz = 1.35 GHz
Even though that CPU is multiplier locked, you can change the multiplier by connecting the "L1" dots on the CPU itself with a normal pencil (the graphite is just conductive enough to let you change the multipliers). If you do this properly, it is perfectly safe. For example, keeping the 133 MHz bus and raising the multiplier gives:

9 * 133 = 1,200 MHz = 1.2 GHz = default
9.5 * 133 = 1,267 MHz = 1.267 GHz
10 * 133 = 1,333 MHz = 1.333 GHz
Or change both together, like this:
10 * 140 = 1,400 MHz = 1.4 GHz
All you need to do here is use common sense really. For example, you wouldn't want to try to run a 233 MHz CPU at 400 MHz. For one thing, it won't work. For another, that probably would damage your CPU. I would advise starting out low and slowly trying to go higher. If you have a 233 MHz CPU, try running it one step higher, then the next step. Most likely you won't be able to get a CPU like this to run much higher than 300, but that is a possibility.
Be more concerned with changing the bus speed than the CPU speed as that will provide the greatest amount of speed improvement. For example, running a CPU at 250 (83.3x3) would be better than 262.5 (75x3.5) in most cases because the bus speed of 83 is higher than 75. The default for most CPUs is at 66 MHz bus speed. The newer P2's bus speed is 100 MHz by default. Many computers will not have options on bus speeds, but if you get any of the motherboards I recommend, you will have different bus speed options. The higher bus speed you can run at reliably, the better. Depending on what your other components are though, they may cause your computer to crash or become unstable if they can't handle the higher bus speeds. With bus speeds like 133, you have to have higher quality PC133 or PC2100 DDR SDRAM to be able to achieve this bus speed reliably.
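The arithmetic above is easy to script. Here is a quick sketch that tabulates the core clock for a few multiplier and bus-speed combinations; the values mirror the examples discussed and are not a recommendation for any particular chip.

def core_clock_mhz(multiplier, bus_mhz):
    # Core clock = multiplier * front-side-bus frequency
    return multiplier * bus_mhz

# Combinations echoing the examples above: raise the bus, the multiplier, or both.
combos = [(9, 133.33), (9, 140), (9, 150), (9.5, 133.33), (10, 133.33), (10, 140),
          (3, 83.3), (3.5, 75)]

for mult, bus in combos:
    mhz = core_clock_mhz(mult, bus)
    print("%4.1f x %6.2f MHz bus = %7.1f MHz (%.2f GHz)" % (mult, bus, mhz, mhz / 1000))

The last two rows are the 250 MHz and 262.5 MHz configurations compared above.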


Saturday, July 3, 2010

The iPhone 4G is COMING!!!

June 7, 2010 - Are you ready? Release likely around June 24, 2010!

Lots of speculation is going around on the new iPhone 4G... in HD!! Official features were revealed at the WWDC conference today.

Verizon Wireless is currently testing a CDMA version of the iPhone 4G, and Verizon confirms they are making network changes to bring the iPhone to their network. The new iPhone 4G is going to be loaded with awesome new features like video chat, multi-tasking and extreme downloading. (List of possible features below.) Just when you think there is nothing else to come up with, more and more and more technology comes out. And it is on the rise, and not just at Apple, Inc.!

Woo hoo! The iPhone 4G could also have dual-core processors and more powerful graphics chips that can deliver higher video resolutions and better "still" images when taking pictures.

There are a few networks working on building a 4G network. T-Mobile would be a likely carrier since they are GSM already. Sprint has a 4G network already... AT&T and Verizon Wireless are in the beginning stages. There is talk of Verizon Wireless getting the iPhone sometime in 2010 since the exclusive contract with AT&T expires then, but it could be renewed until 2012.

Whether or not it will be 4G will be up to them!... can they build it in time? Regardless, there is much anticipation about how many people will leave AT&T for Verizon Wireless because of AT&T's lagging on app restrictions like Slingplayer, Google Voice and Skype (on the 3G network, not Wi-Fi).

AT&T's restrictions have caused the percentage of people jailbreaking their iPhones to rise, since jailbreaking usually comes with Cydia, which is the app store for jailbroken phones. Most of the applications, ringtones, and even iPhone themes!... are free with Cydia. Winterboard is part of the download, and it very easily adds the changes to your phone so you don't have to figure out how to do it on your own... it is VERY automated.

The Palm Pre on Sprint and the HTC EVO (Sprint now offering a 4G network) have made an attempt at being competitive with the iPhone and BlackBerry... and it seems they are making headway.

iPhone 4G looks promising in terms of being sleek, packed with new hardware and multi-tasking software. Very exciting.

A few features of iPhone 4G:

Thinner! With shiny glass back piece - 9.3 mm thick.

Unified Mailbox (all email accounts in one area).

Application folders.

New wallpaper/background options.

A new, sleeker body design.

OLED screen.

Multi-Tasking. (use multiple functions at once without going in and out of apps).

iChat camera (on the front so you can have video chat!!!).

32 GB (basic) and 64 GB of memory. You're sure to never run out.

Extended battery life!!!

Hi Definition Camera (5 megapixel) with a backside illuminated sensor AND FLASH!

Hi Definition Camcorder.

Hi Definition audio.

Messaging light.

True GPS built in.

