Thursday, 24 July 2014

Brain Mapping - A new map, a decade in the works, shows structures of the brain in far greater detail than ever before, providing neuroscientists with a guide to its immense complexity.



Neuroscientists have made remarkable progress in recent years toward understanding how the brain works. And in coming years, Europe’s Human Brain Project will attempt to create a computational simulation of the human brain, while the U.S. BRAIN Initiative will try to create a wide-ranging picture of brain activity. These ambitious projects will greatly benefit from a new resource: detailed and comprehensive maps of the brain’s structure and its different regions.
As part of the Human Brain Project, an international team of researchers led by German and Canadian scientists has produced a three-dimensional atlas of the brain that has 50 times the resolution of previous such maps. The atlas, which took a decade to complete, required slicing a brain into thousands of thin sections and digitally stitching them back together with the help of supercomputers. Able to show details as small as 20 micrometers, roughly the size of many human cells, it is a major step forward in understanding the brain’s three-dimensional anatomy.
To guide the brain’s digital reconstruction, researchers led by Katrin Amunts at the Jülich Research Centre in Germany initially used an MRI machine to image the postmortem brain of a 65-year-old woman. The brain was then cut into ultrathin slices. The scientists stained the sections and then imaged them one by one on a flatbed scanner. Alan Evans and his coworkers at the Montreal Neurological Institute organized the 7,404 resulting images into a data set about a terabyte in size. Slicing had bent, ripped, and torn the tissue, so Evans had to correct these defects in the images. He also aligned each one to its original position in the brain. The result is mesmerizing: a brain model that you can swim through, zooming in or out to see the arrangement of cells and tissues.
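To make the alignment step concrete, here is a minimal Python sketch, offered as an illustration only and not the pipeline Evans's team actually used: it registers one scanned slice to a reference image by estimating a simple translation, whereas the real reconstruction also had to undo tears, rips, and nonlinear distortions. The file names are hypothetical.

```python
# Minimal sketch: translation-only alignment of one scanned histology slice
# to a reference slice (for example, from the MRI volume). The real pipeline
# used far more elaborate, nonlinear corrections; file names are hypothetical.
import numpy as np
from skimage import io
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def align_slice(slice_path, reference_path):
    moving = io.imread(slice_path, as_gray=True).astype(np.float32)
    reference = io.imread(reference_path, as_gray=True).astype(np.float32)

    # Estimate the (row, column) offset that best maps the scanned slice
    # onto the reference, via phase correlation.
    offset, error, _ = phase_cross_correlation(reference, moving)

    # Shift the scanned slice back into the reference frame.
    return nd_shift(moving, offset), offset

aligned, offset = align_slice("slice_03702.png", "reference_03702.png")
print("estimated offset (rows, cols):", offset)
```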
At the start of the 20th century, a German neuroanatomist named Korbinian Brodmann parceled the human cortex into nearly 50 different areas by looking at the structure and organization of sections of brain under a microscope. “That has been pretty much the reference framework that we’ve used for 100 years,” Evans says. Now he and his coworkers are redoing Brodmann’s work as they map the borders between brain regions. The result may show something more like 100 to 200 distinct areas, providing scientists with a far more accurate road map for studying the brain’s different functions.
“We would like to have in the future a reference brain that shows true cellular resolution,” says Amunts—about one or two micrometers, as opposed to 20. That’s a daunting goal, for several reasons. One is computational: Evans says such a map of the brain might contain several petabytes of data, which computers today can’t easily navigate in real time, though he’s optimistic that they will be able to in the future. Another problem is physical: a brain can be sliced only so thin.
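A rough back-of-envelope estimate (mine, not the researchers') shows why the numbers get so large: if data volume scales with voxel count, then improving the resolution by a factor of ten along each of the three axes multiplies the data by a factor of a thousand.

```python
# Rough estimate, assuming data volume scales with voxel count: improving
# resolution by a linear factor r multiplies the number of voxels by r**3.
atlas_terabytes = 1.0                 # ~1 TB at 20-micrometer resolution
for target_um in (2.0, 1.0):
    r = 20.0 / target_um              # linear improvement in resolution
    petabytes = atlas_terabytes * r ** 3 / 1000.0
    print(f"at {target_um} micrometers: roughly {petabytes:.0f} PB")
```

That lands in the single-digit-petabyte range, consistent with Evans's estimate of several petabytes.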
Advances could come from new techniques that allow scientists to see the arrangement of cells and nerve fibers inside intact brain tissue at very high resolution. Amunts is developing one such technique, which uses polarized light to reconstruct three-dimensional structures of nerve fibers in brain tissue. And a technique called Clarity, developed in the lab of Karl Deisseroth, a neuroscientist and bioengineer at Stanford University, allows scientists to directly see the structures of neurons and circuitry in an intact brain. The brain, like any other tissue, is usually opaque because the fats in its cells block light. Clarity melts the lipids away, replacing them with a gel-like substance that leaves other structures intact and visible. Though Clarity can be used on a whole mouse brain, the human brain is too big to be studied fully intact with the existing version of the technology. But Deisseroth says the technique can already be used on blocks of human brain tissue thousands of times larger than a thin brain section, making 3-D reconstruction easier and less error prone. And Evans says that while Clarity and polarized-light imaging currently give fantastic resolution to pieces of brain, “in the future we hope that this can be expanded to include a whole human brain.”

Agricultural Drones

Ryan Kunde is a winemaker whose family’s picture-perfect vineyard nestles in the Sonoma Valley north of San Francisco. But Kunde is not your average farmer. He’s also a drone operator—and he’s not alone. He’s part of the vanguard of farmers who are using what was once military aviation technology to grow better grapes with pictures taken from the air, part of a broader trend of using sensors and robotics to bring big data to precision agriculture.

What “drones” means to Kunde and the growing number of farmers like him is simply a low-cost aerial camera platform: either miniature fixed-wing airplanes or, more commonly, quadcopters and other multibladed small helicopters. These aircraft are equipped with an autopilot using GPS and a standard point-and-shoot camera controlled by the autopilot; software on the ground can stitch aerial shots into a high-resolution mosaic map. Whereas a traditional radio-controlled aircraft needs to be flown by a pilot on the ground, in Kunde’s drone the autopilot (made by my company, 3D Robotics) does all the flying, from auto takeoff to landing. Its software plans the flight path, aiming for maximum coverage of the vineyards, and controls the camera to optimize the images for later analysis.
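As an illustration only (this is not 3D Robotics' actual planner), coverage-style flight planning can be sketched as a "lawnmower" sweep of GPS waypoints, with the spacing between passes set by the camera's footprint and the overlap needed to stitch the photos together. The coordinates, footprint, and function below are invented for the example.

```python
# Illustrative "lawnmower" survey pattern: alternating north-south passes
# across a rectangular block, spaced so that neighboring photos overlap
# enough to be stitched into a mosaic. All numbers here are made up.
def survey_waypoints(lat_min, lat_max, lon_min, lon_max,
                     footprint_width_deg, overlap=0.7):
    """Return (lat, lon) waypoints sweeping the box in alternating passes."""
    spacing = footprint_width_deg * (1.0 - overlap)  # distance between passes
    waypoints, lon, heading_north = [], lon_min, True
    while lon <= lon_max:
        if heading_north:
            waypoints += [(lat_min, lon), (lat_max, lon)]
        else:
            waypoints += [(lat_max, lon), (lat_min, lon)]
        heading_north = not heading_north  # reverse direction each pass
        lon += spacing
    return waypoints

plan = survey_waypoints(38.40, 38.41, -122.55, -122.54, footprint_width_deg=0.0005)
print(f"{len(plan)} waypoints")  # each would be uploaded to the autopilot
```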
This low-altitude view (from a few meters above the plants to around 120 meters, which is the regulatory ceiling in the United States for unmanned aircraft operating without special clearance from the Federal Aviation Administration) gives a perspective that farmers have rarely had before. Compared with satellite imagery, it’s much cheaper and offers higher resolution. Because it’s taken under the clouds, it’s unobstructed and available anytime. It’s also much cheaper than crop imaging with a manned aircraft, which can run $1,000 an hour. Farmers can buy the drones outright for less than $1,000 each.
The advent of drones this small, cheap, and easy to use is due largely to remarkable advances in technology: tiny MEMS sensors (accelerometers, gyros, magnetometers, and often pressure sensors), small GPS modules, incredibly powerful processors, and a range of digital radios. All those components are now getting better and cheaper at an unprecedented rate, thanks to their use in smartphones and the extraordinary economies of scale of that industry. At the heart of a drone, the autopilot runs specialized software—often open-source programs created by communities such as DIY Drones, which I founded, rather than costly code from the aerospace industry.
Drones can provide farmers with three types of detailed views. First, seeing a crop from the air can reveal patterns that expose everything from irrigation problems to soil variation and even pest and fungal infestations that aren’t apparent at eye level. Second, airborne cameras can take multispectral images, capturing data from the infrared as well as the visual spectrum, which can be combined to create a view of the crop that highlights differences between healthy and distressed plants in a way that can’t be seen with the naked eye. Finally, a drone can survey a crop every week, every day, or even every hour. Combined to create a time-series animation, that imagery can show changes in the crop, revealing trouble spots or opportunities for better crop management.
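One common way to combine the infrared and visible bands into a plant-health map (a standard index, though not necessarily the one any particular vendor uses) is the Normalized Difference Vegetation Index, sketched below on toy arrays; real inputs would be the near-infrared and red bands of the stitched mosaic.

```python
# Minimal NDVI sketch: healthy, chlorophyll-rich leaves reflect strongly in
# the near infrared, so higher values generally indicate healthier plants.
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red), guarded against division by zero."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / np.maximum(nir + red, 1e-6)

# Toy 2x2 "images"; values near 1 suggest vigorous vegetation.
nir = np.array([[200, 180], [60, 220]])
red = np.array([[40, 50], [55, 30]])
print(ndvi(nir, red))
```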
Top: A drone from Precision Hawk is equipped with multiple sensors to image fields.

Bottom: This image depicts vegetation in near-infrared light to show chlorophyll levels.



It’s part of a trend toward increasingly data-driven agriculture. Farms today are bursting with engineering marvels, the result of years of automation and other innovations designed to grow more food with less labor. Tractors autonomously plant seeds within a few centimeters of their target locations, and GPS-guided harvesters reap the crops with equal accuracy. Extensive wireless networks backhaul data on soil hydration and environmental factors to faraway servers for analysis. But what if we could add to these capabilities the ability to more comprehensively assess the water content of soil, become more rigorous in our ability to spot irrigation and pest problems, and get a general sense of the state of the farm, every day or even every hour? The implications cannot be stressed enough. We expect 9.6 billion people to call Earth home by 2050. All of them need to be fed. Farming is an input-output problem. If we can reduce the inputs—water and pesticides—and maintain the same output, we will be overcoming a central challenge.
Agricultural drones are becoming a tool like any other consumer device, and we’re starting to talk about what we can do with them. Ryan Kunde wants to irrigate less, use less pesticide, and ultimately produce better wine. More and better data can reduce water use and lower the chemical load in our environment and our food. Seen this way, what started as a military technology may end up better known as a green-tech tool, and our kids will grow up used to flying robots buzzing over farms like tiny crop dusters.
