For months, citizen scientists in Japan have been trying to shine a brighter public spotlight on radiation readings from the region surrounding the Fukushima nuclear disaster site, in part because there was so little information coming from the Japanese government in the days following the March 11 earthquake and tsunami that touched off the crisis.
Now Yahoo Japan is adding to that spotlight by offering a map-centric database of readings via Radiation.Yahoo.co.jp.
Kyodo News Service quotes an official from Yahoo Japan as saying that the beta service displays data gathered at 11 locales, including Tokyo, Nihonmatsu in Fukushima Prefecture, Sendai and the city of Chiba. More observation points will be added in the future, the official is quoted as saying.
Sean Bonner, a Los Angeles-based organizer for the non-governmental Safecast radiation-monitoring project, said his group is making a significant contribution to Yahoo Japan's service — even though it wasn't acknowledged in the Kyodo report.
"It says the data was collected by Keio University, but in fact it's the data that we (Safecast) collected and Keio is helping with," Bonner told me in an email. "It's the same data that we are displaying on our site, and was collected with the devices we designed and installed."
A robot outfitted with a rudimentary brain-like neural network is able to tackle new tasks by calling on its past experience and knowledge to think and act for itself.
This breakthrough demonstrates the evolving ability of robots to adapt to ever-changing environments, according to Osamu Hasegawa, an associate professor at the Tokyo Institute of Technology who is developing the technology.
"So far, robots, including industrial robots, have been able to do specific tasks quickly and accurately. But if their environment changes slightly, robots like that can't change," Hasegawa said in a press release.
In this video, the robot uses its artificial intelligence to pour a glass of "water" (beads, actually, since water and electronics don't mix all that well) and then, mid-task and with its hands full, it's told to make the water cold. What to do? It spies the "ice cube" on a nearby tray and decides to put down the bottle so it can pick up the ice cube and put it in the glass.
Hasegawa's robot is similar to the Bakerbot reported on earlier this week that is able to make and bake a cookie almost from scratch, using code that enables it to determine where ingredients are, pour and mix them together, and place them in the oven.
Hasegawa's team has developed an algorithm called a Self Organizing Incremental Neural Network, or SOINN, to do the thinking. The network obtains information from the robot's visual, auditory and tactile sensors. In addition, it does what people do these days: goes online and chats with others (robots in this case).
So, for example, let's say this robot in Japan is asked to make a cup of tea. It doesn't know how, so it goes online and learns from a robot in London how to make a perfect cup of English tea. But, since it's in Japan, the robot knows this isn't quite right.
Based on its past experience and surroundings, Hasegawa said, "we think this robot will become able to transfer that knowledge to its immediate situation, and make green tea using a Japanese teapot."
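Hasegawa's team hasn't published its code here, but the core SOINN idea (grow a network of prototype nodes as novel inputs arrive, and refine existing nodes otherwise) can be sketched in a few lines of Python. This is a toy illustration with an invented fixed threshold; the real algorithm adapts its thresholds per node and also manages edges between nodes and prunes noise:

```python
import numpy as np

class TinySOINN:
    """Toy single-layer sketch of the SOINN idea: keep a set of
    prototype nodes, insert a new node when an input falls outside
    the similarity threshold of its nearest prototype, and otherwise
    nudge the nearest prototype toward the input."""

    def __init__(self, threshold=1.5):
        self.threshold = threshold  # fixed here; real SOINN adapts it per node
        self.nodes = []             # learned prototype vectors

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x)
            return
        dists = [np.linalg.norm(x - n) for n in self.nodes]
        nearest = int(np.argmin(dists))
        if dists[nearest] > self.threshold:
            self.nodes.append(x)    # novel input: grow the network
        else:                       # familiar input: refine what we know
            self.nodes[nearest] += 0.1 * (x - self.nodes[nearest])

net = TinySOINN()
for reading in np.random.randn(200, 3):   # stand-in for streaming sensor data
    net.learn(reading)
print(len(net.nodes), "prototypes learned")
```

The important property is the incremental part: the network never needs to be retrained from scratch when something new shows up, which is what lets the robot keep acquiring skills on the fly.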
Pakistanis and international and local media gather outside Osama bin Laden's compound on May 3 in Abbottabad, Pakistan. Bin Laden was killed there during a raid by U.S. special forces on May 2.
By John Roach, Contributing Writer, NBC News
Osama bin Laden regularly taunted the United States with propaganda photos and videos that left analysts asking: Where in the world is he? U.S. spy agencies want software that can analyze such imagery and quickly identify where it was made.
Currently, human intelligence analysts pore over propaganda imagery to tease out clues from things such as the geography, vegetation and even the style of clothes worn and gadgets used, and try to match them up with existing images taken from satellites and on the ground.
Google recently launched an image search engine billed as being able to do the type of task IARPA (the Intelligence Advanced Research Projects Activity) wants, but it's full of kinks. It can't even distinguish George W. Bush from Barack Obama. The reverse image search engine TinEye is designed to perform a similar function.
IARPA says these types of consumer-oriented systems are limited because they "tend to work best in geographic areas with significant population densities or that are well traveled by tourists, and where the query image or video contains notable features such as mountains or buildings."
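Stripped to its essence, the task is a nearest-neighbor search: reduce the query image to a feature vector and find the closest match in a database of geotagged references. A bare-bones sketch in Python follows; the three-number "features" and the database entries are invented stand-ins for what would really be high-dimensional descriptors computed from satellite and ground imagery:

```python
import numpy as np

# Hypothetical geotagged reference database: feature vector -> (lat, lon).
reference_db = [
    (np.array([0.9, 0.1, 0.3]), (34.52, 69.18)),  # say, mountainous terrain
    (np.array([0.2, 0.8, 0.5]), (33.70, 73.06)),  # say, dense urban area
    (np.array([0.1, 0.2, 0.9]), (24.86, 67.00)),  # say, a coastal scene
]

def geolocate(query_features):
    """Return the location of the reference image whose features
    are closest to the query's (plain Euclidean distance)."""
    best = min(reference_db,
               key=lambda entry: np.linalg.norm(entry[0] - query_features))
    return best[1]

print(geolocate(np.array([0.85, 0.15, 0.25])))  # -> (34.52, 69.18)
```

The hard part, of course, is everything this sketch assumes away: building descriptors that survive changes in season, lighting and camera angle, and covering the planet's unphotographed spaces in the reference database.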
The Finder Program, as its wished-for software is called, "will deliver rigorously tested solutions for the image/video geolocation task in any outdoor terrestrial location."
Work on the program is scheduled to kick off in earnest next January. That hoped-for solution isn't expected until 2016. But when it comes, and even as researchers work on the task, terrorists will have to be ever more careful about what to include in their propaganda imagery.
Archaeologists talk about their underwater discovery off the coast of Panama.
By Alan Boyle, Science Editor, NBC News
It may not be a $500 million golden hoard, but underwater archaeologists are nevertheless excited about finding what they believe are traces of the five ships that British privateer Henry Morgan lost off the coast of Panama in 1671.
The discovery was made at the mouth of Panama's Chagres River, near another underwater site where six iron cannons were found. Taken together, the evidence suggests that the three-century-old story of Captain Morgan's lost fleet is finally near its conclusion.
The story begins with Morgan, a Welsh sea captain who was given the British crown's official sanction to prey on Spanish sea trade. Some would call Morgan a pirate, others a buccaneer, but "privateer" is the more charitable term.
In 1671, Morgan aimed to weaken Spain's control of the Caribbean by sacking Panama City, and the first step was to capture Castillo de San Lorenzo, a Spanish fort on the cliff overlooking the entrance to the Chagres River. That river served as the only water passageway between the Caribbean and the capital.
Morgan and his pirates of the Caribbean took over the fort and went on to overwhelm the city's defenders. But in the process, Morgan lost his flagship and four other ships to the rough seas and the shallow reef surrounding the fort.
From there on, the story takes some dark twists and turns. Morgan had to move on to Panama City, abandoning the sinking ships. When the British buccaneers finally took over the city, they discovered that Spanish authorities had moved much of their treasure out to sea, beyond their reach. That made Morgan's men angry. Their mistreatment of the local citizenry in the wake of the "Sack of Panama" added to Morgan's disreputable image.
By the time he died in 1688, Morgan was seen as one of the most bloodthirsty (and most successful) pirates in the Americas. His exploits inspired enough pirate tales to fill a dead man's chest, including the Errol Flynn movie "Captain Blood" and the James Bond novel "Live and Let Die."
Any riches that may have been on Morgan's ships are thought to be long gone, thanks to treasure hunters who have plucked gold coins and other booty from the shallow waters of the Lajas Reef. But a team of U.S. archaeologists has been working to locate Morgan's ships and help the Panamanian government preserve the remaining artifacts.
'The story is the treasure'
"To us, the ship is the treasure — the story is the treasure," said Fritz Hanselman, an archaeologist with the River Systems Institute and the Center for Archaeological Studies at Texas State University. "And you don't have a much better story than Captain Henry Morgan's Sack of Panama City and the loss of his five ships."
Captain Morgan / Chris Bickford
A team of underwater archaeologists studies the wreckage of a ship they believe to be part of Captain Henry Morgan's lost fleet. The dive team discovered part of the starboard side of a 17th-century wooden ship hull and a series of unopened cargo boxes and chests encrusted in coral.
Volunteers from the National Park Service's Submerged Resources Center and the NOAA/UNC-W Aquarius Reef Base are working alongside Hanselman and other archaeologists and divers from Texas State University.
They knew they were on the right track last year when they discovered the 17th-century cannons. The experts widened their search, using a magnetometer that could pick up the signatures of objects buried beneath the sand and mud on the river bottom. Eventually, divers came upon a 52-by-22-foot section from the starboard side of a wooden ship's hull, along with unopened cargo boxes and chests encrusted in coral.
"We got really excited," Hanselman said in a video recounting the find.
Bert Ho, a survey archaeologist at the National Park Service, said the story behind the shipwrecks is being uncovered slowly through a series of dives. "Each dive tells us a little bit more, each archaeological drawing, each measurement — it all adds up," he said. "It's telling us the story of the wreck, the origin of the wreck, and hopefully the name of the wreck."
Captain Morgan / Chris Bickford
Bert Ho, an underwater project survey archaeologist with the National Park Service's Submerged Resources Center, based in Denver, maps the shipwreck with drawings using synthetic calque paper and plastic lead pencils.
Yo ho ho and a bottle of rum
The extended search has been supported by a grant from the makers of Captain Morgan Rum, which was named after the 17th-century privateer.
"Captain Henry Morgan was a natural-born leader with a sense of adventure and an industrious spirit that the brand embraces today,” Tom Herbst, brand director for Captain Morgan USA, said in a statement. "When the opportunity arose for us to help make this discovery mission possible, it was a natural fit for us to get involved. The artifacts uncovered during this mission will help bring Henry Morgan and his adventures to life in a way never thought possible."
Herbst's company may win a share of the publicity for its role in the search for Captain Morgan's fleet, but it won't get any of the booty: Any artifacts excavated by the dive team belong to the Panamanian government, to be preserved and displayed by the Patronato Panama Viejo.
Using data from the VISTA infrared survey telescope at the European Southern Observatory's Paranal Observatory in Chile, an international team of astronomers has discovered 96 new open clusters hidden by the dust in the Milky Way. Thirty of the clusters are shown in this mosaic.
If you're looking for hidden treasures, the dusty disk of our Milky Way galaxy might not be the first place you'd look. But that's exactly where the European Southern Observatory found almost a hundred glittering prizes.
These 30 pictures show just a portion of the treasure trove: 96 open star clusters hiding in the galaxy's dusty core. These stars can't be seen in the visible-light spectrum because they're shrouded within clouds of dust, but the ESO's VISTA infrared survey telescope is able to see through the dust. And that's not all: Sophisticated software was able to remove the glare of foreground stars, allowing the dimmer clusters to stand out.
Why go to all that trouble? Well, astronomers surmise that the majority of stars that are at least 50 percent bigger than our own sun are formed within these types of open clusters, and yet not that many of them have been seen — primarily due to all that pesky dust. Getting a better read on the distribution and composition of open clusters will provide new pieces to the puzzle of our galaxy's formation.
"We found that most of the clusters are very small and only have about 10 to 20 stars. Compared to typical open clusters, these are very faint and compact objects — the dust in front of these clusters makes them appear 10,000 to 100 million times fainter in visible light. It’s no wonder they were hidden," Radostin Kurtev, a member of the team making the observations, said in today's image advisory from the ESO.
The team's findings are to be published in the journal Astronomy & Astrophysics. But these discoveries may well be merely a first taste of the treasure. "We’ve just started to use more sophisticated automatic software to search for less concentrated and older clusters," said Jura Borissova, the lead author of the study. "I am confident that many more are coming soon."
The future of robots is shaping up to be wonderful for couch potatoes: robots can fetch beers, fold laundry, and now they can even bake cookies.
This latest breakthrough comes from the Distributed Robotics Lab at MIT, where graduate student Mario Bollini is plugging away at code that allows robots to make decisions for themselves as they accomplish specific tasks.
The Bakerbot, which is a Willow Garage PR2 robot, represents a hybrid approach to this end goal, he said. The robot knows, for example, that four bowls with cookie ingredients are on the table as well as a mixing bowl and a cookie sheet.
"All the manipulation is done on the fly," he said. It calculates, for example, how to pick up the bowls with ingredients and pour them into the mixing bowl, mix them together, and put them in the oven. The result is a baked cookie — not the prettiest cookie in the world, but nevertheless a baked cookie.
Ultimately, researchers would like to apply the knowledge (and code) gained from the Bakerbot project to design a robot that would know what to do when asked, for example, to bake a cake.
"It would try to understand that, find a recipe for that, and it would try to understand what the recipe is telling it to do and then use actions that it knows how to do to accomplish it," Bollini said.
Beyond baking, robots with these types of skills are already being eyed for factory jobs. Current robots on the assembly line are programmed to do one task over and over again. If someone gets in their way, that person gets hit. And if the robots need to do a different task, they have to be completely reprogrammed.
A more dynamic robot could be useful, for example, on an auto assembly line where robots install windshields of all shapes and sizes on several different models of cars, and do so without crashing into one another or their human colleagues.
In the more distant future, Bakerbot really might find its home inside a home, particularly for elder care in countries with aging populations such as Japan, Bollini noted. There, they'll likely start out doing cooperative tasks, not baking cookies all by themselves.
"If you're not strong enough to lift the mixing bowl and put it out, the robot will do that part of the task and then the human does things that are easier for the person to do, like recognize where everything is and get it out of the cupboards," he said.
This result might actually be the prettiest cookie in the world.
The smaller object in this photo is thought to be a brown-dwarf companion that orbits the distant star Gliese 229. Some have worried that our sun has a similar companion whose gravitational effect periodically sends more comets toward Earth — but the latest analysis of cometary data shows no sign of such an effect.
By Alan Boyle, Science Editor, NBC News
Doomsayers have been wringing their hands for years over the possibility that an unseen companion to our sun periodically diverts a hail of comets toward Earth, sparking mass extinctions like cosmic clockwork. Now an astronomer has shown that the evidence for such a cycle in the flux of comets or asteroids doesn't actually exist.
The research is the latest knock against claims that the dark companion, nicknamed Nemesis or the "Death Star," might be out to get us in 2012.
Like many other 2012 myths, the Nemesis hypothesis had a smidgen of scientific research behind it. Back in 1984, paleontologists proposed that there seemed to be a 27 million-year cycle of extinctions that may have had an extraterrestrial cause. The prime suspect was a hypothetical brown dwarf or red dwarf that disrupted the orbits of comets on the solar system's fringe and sent them screaming earthward.
Nemesis has gotten swept up with the Planet X hypothesis, which holds that an as-yet-undetected planet will wreak havoc on Earth — and both those hypotheses have fed into worries about a 2012 apocalypse supposedly foretold by the ancient Maya calendar.
You've probably already figured out that the worries are totally bogus, and not just because the "long count" calendar used by the Maya was merely a calendar and not a fortune-telling device.
Last year, researchers reported that if the Nemesis companion existed, it wouldn't orbit in a nice, precise 27 million-year cycle. That study, published in Monthly Notices of the Royal Astronomical Society Letters, was portrayed as the "final nail in the coffin" for the Nemesis hypothesis. But the researchers still couldn't explain why extinctions seemed to peak every 27 million years.
"For me, it's a complete head-scratcher," University of Kansas physicist Adrian Melott said at the time.
Don't panic
Now a researcher at Germany's Max Planck Institute for Astronomy, Coryn Bailer-Jones, essentially says that Melott can stop with the scratching. His analysis, published in the Monthly Notices of the Royal Astronomical Society, suggests that the seeming periodicity may look like a pattern but actually is a statistical artifact.
"There is a tendency for people to find patterns in nature that do not exist," Bailer-Jones said in a report from the Max Planck Institute. "Unfortunately, in certain situations traditional statistics plays to that particular weakness."
Bailer-Jones looked at variations in the rate of cratering on our planet over time, using an alternative method for evaluating probabilities known as Bayesian statistical analysis. Bayesian analysis provides a reality check for statisticians who think they see patterns in their data, and in this case, the analysis ruled out simple periodic variations. Instead, the figures pointed to a steady trend of increased cratering over the past 250 million years.
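The flavor of that comparison can be captured in a short script: score a steadily rising cratering rate against a 27-million-year periodic one on a set of crater ages, and see which explains the data better. The ages below are synthetic and the models are cartoons, not Bailer-Jones' actual analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic crater ages in millions of years, skewed so that younger
# craters are more numerous (the "steady rise" scenario).
ages = 250 * (1 - np.sqrt(rng.uniform(size=60)))

grid = np.linspace(0.0, 250.0, 2501)

def log_like(rate_fn):
    """Log-likelihood of the ages, treating rate_fn as an
    unnormalized probability density over age."""
    norm = rate_fn(grid).mean() * 250.0   # approximate integral over [0, 250]
    return float(np.sum(np.log(rate_fn(ages) / norm)))

# Model A: cratering rate rises steadily toward the present.
ll_trend = log_like(lambda t: 1.0 + (250.0 - t) / 250.0)

# Model B: rate spikes every 27 Myr; averaging over the unknown phase
# is a crude stand-in for marginalizing over a prior.
lls = np.array([log_like(lambda t, p=p:
                         1.0 + np.cos(2 * np.pi * (t - p) / 27.0) ** 2)
                for p in np.linspace(0.0, 27.0, 28)])
m = lls.max()
ll_periodic = m + np.log(np.mean(np.exp(lls - m)))   # log-mean-exp

print("trend model   :", round(ll_trend, 1))    # should win on this data
print("periodic model:", round(ll_periodic, 1))
```

Averaging the periodic model's likelihood over its unknown phase, rather than keeping only the best-fitting phase, is the Bayesian move: it automatically penalizes the model's extra flexibility, which is exactly how apparent patterns get exposed as statistical flukes.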
Bailer-Jones said there are two possible explanations for the perceived rise: It may be that when you're looking at smaller craters, the older ones are harder to spot due to erosion. That would leave you with the impression that an increasing number of small craters are being created as you go forward through time. And in fact, Bailer-Jones said the trend toward more craters seemed to go away "if we look only at craters larger than 35 kilometers and younger than 400 million years, which are less affected by erosion and in-filling."
The other explanation would be that the increase in the cratering rate is real. The institute said some analyses of craters on the moon, where the scars left behind by cosmic collisions are not subject to erosion or in-filling, suggest the impact rate may be rising. But if scientists accept that explanation, they're left with another head-scratcher: What's causing the rising rate?
"From the crater record, there is no evidence of Nemesis," Bailer-Jones said. "What remains is the intriguing question of whether or not impacts have become ever more frequent over the past 250 million years."
Update for 9:05 p.m. ET: Over at the Bad Astronomy blog, Phil Plait clearly explains the impact (heh, heh) of Bayesian analysis on the cratering question:
"This is different than standard statistics, and is less prone to bias due to uncertainties in age and size of craters. In using standard statistics, clusters in crater ages can always be found, but it’s hard to know if that’s just a random clump or has an actual physical cause — like flipping a coin 10 times and having it come up heads 5 times in a row. It’s unlikely, but how do you know if it’s coincidence or not? Bayesian methods circumvent that issue."
Leave it to the computer scientists to turn baby pictures into a slick animation that traces faces through the years.
The technique is already being put to use on Google's Picasa photo-sharing website as a feature called "Face Movie."
"I have 10,000 photos of my 5-year-old son, taken over every possible expression," Steve Seitz, a computer science and engineering professor at the University of Washington and an engineer at Google's Seattle office, said today in a news release about the research project. "I would like to visualize how he changes over time, be able to see all the expressions he makes, be able to see him in 3-D or animate him from the photos."
Seitz and his colleagues have already started down that road, thanks to the university's "Photobios" project. UW researcher Ira Kemelmacher-Shlizerman is due to present their research next week in Vancouver, B.C., at a meeting of the ACM Special Interest Group on Computer Graphics and Interactive Techniques, or SIGGRAPH.
Rapid advances in image recognition
Photobios take advantage of rapid advances in automated image recognition and tagging. In the past, such advances led to the development of Microsoft's Photosynth technology for re-creating clickable 3-D scenes from a "cloud" of images taken from many different angles. (Microsoft and NBC Universal are partners in the msnbc.com joint venture.)
Picasa's Face Movie feature takes that one step further by building in face recognition and name-tagging.
"This work provides a motivation for tagging," Seitz said. "The bigger goal is to figure out how to browse and organize your photo collection. I think this is just one step toward that bigger goal."
To build a Photobio, you'd start with a collection of photos showing the same person, whether they show your daughter or George W. Bush. You can arrange the photos chronologically, or specify the beginning and end points. The software automatically identifies the face and major features, lines up the eyes and morphs smoothly from one image to the next. Automated morphing is one of the key reasons why the results are so easy to produce and easy on the eyes.
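The align-and-dissolve step is simple enough to sketch with OpenCV, assuming the eye coordinates are already known. The UW system finds facial features automatically and uses true feature-based morphing rather than the plain cross-fade shown here, and the file names and coordinates are invented:

```python
import cv2
import numpy as np

CANVAS = (256, 256)                              # output width, height
EYES_OUT = np.float32([[80, 100], [176, 100]])   # where the eyes should land

def align(img, eyes_in):
    """Similarity-warp img (rotate, scale, translate) so its two
    eye coordinates land on the canonical EYES_OUT positions."""
    M, _ = cv2.estimateAffinePartial2D(np.float32(eyes_in), EYES_OUT)
    return cv2.warpAffine(img, M, CANVAS)

def cross_fade(img_a, img_b, steps=15):
    """Yield frames blending from one aligned face to the next."""
    for i in range(steps + 1):
        t = i / steps
        yield cv2.addWeighted(img_a, 1 - t, img_b, t, 0)

# Hypothetical usage: two photos of the same child, years apart.
a = align(cv2.imread("kid_2005.jpg"), [(90, 120), (170, 118)])
b = align(cv2.imread("kid_2008.jpg"), [(70, 140), (180, 142)])
out = cv2.VideoWriter("face_movie.avi",
                      cv2.VideoWriter_fourcc(*"MJPG"), 15, CANVAS)
for frame in cross_fade(a, b):
    out.write(frame)
out.release()
```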
"There's been a lot of interest in the computer vision community in modeling faces, but almost all of the projects focus on specially acquired photos, taken under carefully controlled conditions," Seitz said. "This is one of the first papers to focus on unstructured photo collections, taken under different conditions, of the type that you would find in iPhoto or Facebook."
Better 3-D avatars
Kemelmacher-Shlizerman and Seitz are already working on the next step: taking a collection of photos and turning them into a movable 3-D model of a face. They'll be presenting research on that topic this fall at the International Conference on Computer Vision in Barcelona, Spain.
The researchers say such models could be used to create more realistic animated avatars for use in video conferencing or game play. More accurate face modeling also could well lead to improved face-recognition programs, for personal use (that is, sorting through the pictures on your hard drive or Facebook friend list) as well as for security applications (that is, matching up a photo taken at a security checkpoint with a database of images taken from different perspectives).
Does that sound cool, or scary? Feel free to weigh in with your thoughts on face-tracing and other image-recognition applications you'd like to see.
In addition to Seitz and Kemelmacher-Shlizerman, the authors of the SIGGRAPH presentation, "Exploring Photobios," include Eli Shechtman and Rahul Garg. The research was funded by Google, Microsoft, Adobe Systems and the National Science Foundation.
"The Champ" may not show up on the critics' all-time top-ten lists, but for many scientists, the 1979 flick about a beat-up boxer and his boy is considered the classic tear-jerker — so classic that a clip from the movie serves as the scientific standard for inducing sadness. But how did "The Champ" win its crown? And is it still a contender?
The "saddest movie in the world" has been the focus of Internet buzz ever since last month's Smithsonian.com report, which noted that the film has popped up in a wide variety of studies of depression and grief. For example, "The Champ" played a role in determining that depressed people aren't really more likely to cry than non-depressed people, and that people are more likely to spend money when they're sad.
That's not to say that the experimental subjects were forced to watch the whole 121-minute movie. Psychologists just used a 171-second clip in which the boxer (Jon Voight) goes down for the count, turning on the tears from his son (played by 9-year-old Ricky Schroder, in a performance that won him a Golden Globe). The scene was one of more than 250 film clips selected by psychologists James Gross and Robert Levenson on the basis of recommendations from movie critics, video-store employees and film buffs.
During the late '80s and early '90s, the researchers refined their list and ended up showing 78 clips to 494 undergraduates. Gross and Levenson hoped that various movies would get strong thumbs-up for eliciting amusement, or fear, or sadness, or contentment — but they didn't always hit the mark. For example, their top fear-inducing movies, "The Shining" and "Silence of the Lambs," ended up sparking too many other emotions as well.
In contrast, "The Champ" performed like ... well, you know. The movie "produced levels of sadness that were much greater than those for any other emotion," they wrote in their seminal 1995 paper, "Emotion Elicitation Using Films."
Even though that research is now 16 years old, it's been cited more than 300 times in other scientific articles, and Schroder's cry-fest is still being used as a downer in the lab. (For what it's worth, the best clip on Gross and Levenson's list for eliciting amusement is the fake-orgasm scene from "When Harry Met Sally.")
The fake-orgasm scene from "When Harry Met Sally" rates high on the amusement scale.
Knowing which movies are reliably amusing or depressing is important for psychology experiments, because movies provide a relatively painless way to elicit a variety of emotions — especially the negative ones. Showing someone a sad film clip won't leave lasting mental scars. When you compare it with some of the other methods that can spark feelings of fear, anxiety or anger, such as drugs or electric shocks, the choice is a no-brainer.
But isn't it time to rerun the experiment with a new set of movies? What seemed sad or funny in the '80s may seem sadly dated or unintentionally funny in 2011. And indeed, clips from other flicks such as "Steel Magnolias" and "John Q." have stood in for "The Champ" in some recent studies of sadness. If you have any suggestions for the saddest movie scene ever (or film clips that are the best for inducing fear, amusement or contentment), feel free to list them in your comments below.
Someday, some scientist just might decide to do a sequel to the sad-movie saga. Will a new top tear-jerker rise up for a new generation?
"I know that others have been working on this (as have we)," Levenson, director of the Institute of Personality and Social Research at the University of California at Berkeley, told me in an email, "but I believe the champ still is 'The Champ.'"
Update for 5:15 p.m. ET Aug. 7: Stanford psychologist Sylvia Kreibig, who has done extensive research on emotion-inducing films, got back to me with this email:
"We have worked with two film clips for inducing sadness in our own research (Kreibig, Wilhelm, Roth, & Gross, 2007, 2011; Kolodyazhniy, Kreibig, Roth, Gross, & Wilhelm, 2011), 'Steel Magnolias' and 'John Q.' On a scale from zero (not at all) to 10 (extremely), these films received an average rating of 6.14, with a standard deviation of 2.13. However, we did not compare these films to 'The Champ.' 'Steel Magnolias' has been used in a number of other experiments for studying sadness and has been found to be effective. A study by Goldin, Hutcherson, Ochsner, Glover, Gabrieli, & Gross (2005) tested the neural bases of sadness using 'The Champ' for inducing sadness, which might be of interest to you.
"Besides the Gross & Levenson 1995 paper, there have been at least three other large-scale research studies on validating film clips for emotion induction, including a target category of sadness:
In 1999, Hagemann, Naumann, Maier, Becker, Lürken, & Bartussek found the clip from 'The Champ' to elicit the strongest ratings for sadness among a selection of three sadness-inducing film clips (M=6.64; SD=2.35, on a scale from 0 to 9) and to be highly specific for eliciting the target emotion (90 percent hit rate). These results are based on a German sample, demonstrating that this film clip is effective across national and cultural borders.
Hewig, Hagemann, Seifert, Gollwitzer, Naumann, Bartussek (2005) again tested a set of three sadness-inducing film clips (among other emotion-inducing films) and again found 'The Champ' to be very effective in inducing sadness (M = 7.21, SD = 2.07 on a scale from 0 to 9), while not strongly eliciting other emotions (again in a German sample).
In a more recent study by a Belgian research team using film clips in French, Schaefer, Nils, Sanchez, Philippot (2009) reported a clip from 'City of Angels' (mean of 2.32 on a scale from 1 to 7) to be the most effective, considering discreteness of elicited feelings and mean feeling self-report, but their selection did not include 'The Champ.'
"You see that different film sets and different scales for rating have been used in these studies. There are ways for mapping different scales onto one common scale in order to compare these values, but then the question still remains whether people would nowadays still rate 'The Champ' clip as the strongest sadness-inducing film clip. 'The Champ' seems to be fairly robust against changes in preferences of cinematographic style, as much has changed in the movie world since the film's shooting in 1979. And psychological distance/immersion might influence a film's effectiveness in inducing the targeted emotion, rather than amusement at watching the film clip or nostalgia for times past ... So emotion induction in the psychology laboratory remains to be a challenging issue!"
A nearly new moon takes on an otherworldly glow in a picture taken from the International Space Station. "This is what the moon looked like 16 times today," astronaut Ron Garan writes.
By Alan Boyle, Science Editor, NBC News
Common sights like the streets of New York or a setting moon take on an unearthly look when they're seen from the International Space Station.
This photo of the just-past-new moon was taken after one of Sunday's sunsets by Ron Garan, one of the six astronauts aboard the space station.
It's just "one of" the day's sunsets because the station circles Earth every hour and a half, passing through multiple cycles of day and night, sunrise and sunset. The sun's wavelengths are refracted by the edge of Earth's atmosphere to produce a beautiful display of red and blue rising up from the horizon toward the moon. Even the dark of the moon is slightly light, thanks to the "Earthshine" reflected by our planet's surface.
"This is what the moon looked like 16 times today from space," Garan wrote.
Garan's pictures serve as a reminder that NASA's human spaceflight program is alive and well despite this month's retirement of the space shuttle fleet. Americans, Russians and spacefliers from other countries are due to continue their work in orbit for years to come, supported by Russian, European and Japanese transports — and soon by commercial U.S. spaceships as well.
During the current rotation, Garan has been serving as the six-person crew's unofficial photographer, taking over from Italian astronaut Paolo Nespoli. Garan's orbital snapshots appear on his Twitpic page, and you'll find many more musings about life in space on his website, Fragile Oasis.
Right now Garan is in the midst of a series of blog postings about "the next chapter of human spaceflight," he's working on zero-gravity experiments focusing on fuel efficiency and plant growth, and he's also getting set to play a supporting role inside the station during this week's Russian-led spacewalk. But he still found time to take awesome pictures of these earthly scenes from nearly 250 miles (400 kilometers) above.
Ron Garan / NASA
The boroughs of New York City are on display in this image captured from the International Space Station. "Looks like it was a great day in the Big Apple from space," NASA astronaut Ron Garan writes.
Ron Garan / NASA
Greece, Turkey and their surroundings are spread out in shades of blue and brown in this space-station view. "From the Black Sea to the Nile to Libya, a wonderful view of our fragile oasis," NASA astronaut Ron Garan writes.
"You're struck by the indescribable beauty of our planet," Garan told the New York Daily News' Mike Jaccarino. "You feel this overwhelming gratitude that we've been given this gift. It fills me with some sadness, too, though, at how we've treated this gift, to see how fragile it is, and see that paper-thin atmosphere.
"I wish everybody could see this with their own eyes."
Until then, Garan and his fellow fliers will just have to keep on giving us the next-best thing.
Remember Ancient Lives, the project that's recruiting Internet users to help decipher ancient texts on fragments of papyrus? One of those texts is a "lost" gospel passage about Jesus casting out demons, while another is a "lost" play by Euripides. Oxford papyrologist James Brusuelas emailed me a few more details about those manuscripts, and I've added them as an appendix to last week's Cosmic Log posting.