Last month, a group of Australian scientists published a warning to the citizens of the country and of the world who collectively gobble up some $34 billion annually of its agricultural exports. The warning concerned the safety of a new type of wheat.

As Australia’s number-one agricultural export, a $6-billion annual industry, and the most-consumed grain locally, wheat is of the utmost importance to the country. A serious safety risk from wheat – a mad wheat disease of sorts – would have disastrous effects for the country and for its customers. Which is why alarm bells are being rung over a new variety of wheat being ushered toward production by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) of Australia.

In a sense, the crop is little different from the wide variety of modern genetically modified foods. A sequence of the plant’s genes has been turned off to change the wheat’s natural behavior a bit, to make it more commercially viable (hardier, higher yielding, slower decaying, etc.).

Franken-Wheat?

What’s really different this time – and what has Professor Jack Heinemann of the University of Canterbury, NZ, and Associate Professor Judy Carman, a biochemist at Flinders University in Australia, holding press conferences to draw attention to the subject – is the technique employed to effect the genetic change. It doesn’t modify the genes of the wheat plants in question; instead, a specialized gene blocker interferes with the natural action of the genes.

The process at issue, dubbed RNA interference, or RNAi for short, has been a hotbed of research activity ever since the 1998 research paper that first described it – work that went on to win a Nobel Prize. It is one of a number of so-called “antisense” technologies that suppress natural genetic expression and provide a mechanism for switching off undesirable genetic behaviors. RNAi’s appeal is simple: it can potentially provide a temporary, reversible off switch for genes.
Unlike most other genetic modification techniques, it doesn’t require making permanent changes to the underlying genome of the target. Instead, specialized siRNAs – short strands of RNA that act as chemical blockers, based on the same mechanism our own cells use to temporarily turn genes on and off as needed – are delivered into the target organism and act to block the messages cells use to express a particular gene. When those messages meet their chemical opposites, they turn inert. And when all of the siRNA is used up, the effect wears off.

The new wheat is in early-stage field trials (i.e., it’s been planted to grow somewhere, but has not yet been tested for human consumption), part of a multi-year process on its way to potential approval, not unlike the rigorous process many drugs go through. The researchers responsible are using RNAi to change the composition of the grain’s starch. They are targeting the production of the wheat’s starch-branching enzyme which, if suppressed, would result in starch that digests far more slowly. The result would be a grain with a lower glycemic index – i.e., healthier wheat. This is a noble goal.

However, Professors Heinemann and Carman warn, there’s a risk that the gene silencing done to these plants might make its way into humans and wreak havoc on our bodies. In their press conference and subsequent papers, they describe the possibility that the siRNA molecules – which are pretty hardy little chemicals and not easily gotten rid of – could wind up interacting with our RNA. If their theories prove true, the results might be as bad as mimicking glycogen storage disease IV, a super-rare genetic disorder which almost always leads to early childhood death.

“Franken-Wheat Causes Massive Deaths from Liver Failure!” Now that is potentially headline-grabbing stuff. Unfortunately, much of it is mere speculation at this point, albeit rooted in scientific expertise on the subject.
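The message-blocking step can be sketched as a toy model. This is an illustrative simplification, not real bioinformatics: the sequences below are invented, and actual RNAi involves cellular protein machinery (the RISC complex) that a simple string match glosses over.

```python
# Toy sketch of the RNAi matching step: an siRNA "guide" strand base-pairs
# with a complementary stretch of messenger RNA (mRNA); matched transcripts
# are degraded, silencing the gene until the siRNA is used up.
# All sequences here are made up for illustration.

PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return "".join(PAIR[base] for base in reversed(rna))

def is_silenced(mrna: str, sirna_guide: str) -> bool:
    """The transcript is targeted if the guide's complement occurs in the mRNA."""
    return reverse_complement(sirna_guide) in mrna

# A made-up mRNA and a 10-base guide strand complementary to part of it.
mrna = "AUGGCUAGCUAGGAUCCAGUAACGUAA"
guide = reverse_complement("GCUAGCUAGG")  # guide targeting this stretch

print(is_silenced(mrna, guide))         # True: transcript would be degraded
print(is_silenced("AUGUUUUAA", guide))  # False: no matching site, gene unaffected
```

The key property the article relies on is visible here: nothing in the target's genome is rewritten; the effect lasts only as long as guide molecules are present to intercept the messages.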
What they’ve produced is a series of opinion papers – not scientific research or empirical data proving that what they suspect might happen actually does. They point to what could happen if a number of criteria are met:

If the siRNAs remain in the wheat in transferable form, in large quantities, when the grain makes it to your plate…

And if the siRNA molecules interfere with the somewhat different but largely similar human branching enzyme as well…

Then the result might be symptoms similar to such a condition, on some scale or another, anywhere from completely unnoticeable to highly impactful. They further postulate that if the same effect is seen in animals, it could result in devastating ecological impact. Dead bugs and dead wild animals.

Luckily for us, as potential consumers of these foods, all of these are easily testable theories. And this is precisely the type of data the lengthy approval process is meant to look at. Opinion papers like this – while not to be confused with conclusions resulting from solid research – are a critically important part of the scientific process, challenging researchers to provide hard data on areas that other experts suspect could be overlooked.

Professors Carman and Heinemann provide a very important public good in challenging the strength of the due-diligence process for RNAi’s use in agriculture, an incomplete subject we continue to discover more about every day. However, we’ll have to wait until the data come back on this particular experiment – among thousands of similar ones being conducted at government labs, universities, and in the research facilities of commercial agribusinesses like Monsanto and Cargill – to know if this wheat variety would in fact result in a dietary apocalypse. That’s a notion many anti-genetically modified organism (GMO) pundits seem to have latched onto following the press conference the professors held.
But if the history of modern agriculture can teach us anything, it’s that far more aggressive forms of GMO foods appear to have had a huge net positive effect on the global economy and our lives. Not only have they not killed us, in many ways GMO foods have been responsible for the massive increases in public health and quality of life around the world.

The Roots of the GMO Food Debate

The debate over genetically modified (GM) food is a heated one. Few contest that we are working in somewhat murky waters when it comes to genetically modified anything, human or plant alike. At issue, really, is the question of whether we are prepared to use the technologies we’ve discovered.

In other words, are we the equivalent of a herd of monkeys armed with bazookas, unable to comprehend the sheer destructive power we possess yet perfectly capable of pulling the trigger? Or do we simply face the same type of daunting intellectual challenge as those who discovered fire, electricity, or even penicillin, at a time when the tools to fully understand how they worked had not yet been conceived of?

In all of those cases, we were able to probe, study, and learn the mysteries of these incredible discoveries over time. Sure, there were certainly costly mistakes along the way. But we were also able to make great use of them to advance civilization long before we fully understood how they worked at a scientific level.

Much is the same in the study and practical use of GM foods. While the fundamentals of DNA have been well understood for decades, we are still in the process of uncovering many of the inner workings of what is arguably the single most advanced form of programming humans have ever encountered. It is still very much a rapidly evolving science to this day.
For example, in the 1990s, an idea known simply as “gene therapy” – really a generalized term for a host of new-at-the-time experimental techniques that share the simple characteristic of permanently modifying the genetic make-up of an organism – was all the rage in medical study. Two decades on, it’s hardly ever spoken of. That’s because the great majority of attempted disease therapies from genetic modification failed, with many resulting in terrible side effects and even death for the patients who underwent the treatments.

Its use in the early days, of course, was limited almost exclusively to some of the world’s most debilitating, genetically rooted diseases. Still – whether in their zeal to use a fledgling tool to cure a dreadful malady or in a selfish, hurried desire to be recognized among the pioneers of what they thought would be the very future of medicine – doctors chose to move forward at a dangerous pace with gene therapy.

In one famous case, and somewhat typical of the times, University of Pennsylvania physicians enrolled a sick 18-year-old boy with a liver mutation into a trial for a gene therapy that was known to have resulted in the deaths of some of the monkeys it had just been tested on. The treatment resulted in the young man’s death a few days later, and the lengthy investigation that followed resulted in serious accusations of what can only be called “cowboy medicine.” Not one of science’s prouder moments, to be sure.

But could GM foods be following the same dangerous path? After all, the first GM foods made their way to market during the same time period. The 1980s saw large-scale genetic-science research and experimentation from agricultural companies, producing everything from antibiotic-resistant tobacco to pesticide-hardy corn.
After much debate and study, in 1994 the FDA gave approval to the first GM food to be sold in the United States: the ironically named Flavr Savr tomato, whose delayed-ripening genes made it an ideal candidate for sitting for days or weeks on grocery store shelves. Ever since, there has been a seeming rush of modified foods into the marketplace. Modern GM foods include soybeans, corn, cotton, canola, sugar beets, and a number of squash and greens varieties, as well as products made from them.

One of the most prevalent modifications is to make plants glyphosate-resistant, or in common terms, “Roundup Ready.” This yields varieties that are able to stand up to much heavier doses of the herbicide Roundup, which is used to keep weeds and other pest plants from damaging large monoculture fields, thereby reducing costs and lowering risks.

In total, modern GM crops are estimated to have grown into a $12 billion annual business since their commercialization in 1994, according to the International Service for the Acquisition of Agri-biotech Applications (ISAAA). Over 15 million farms around the world are reported to have grown GM crops, and their popularity increases every year. They’ve brought huge improvements in shelf life, pathogen and other stress resistance, and even added nutritional benefits. For instance, golden rice – the first crop with an entirely new genetic pathway added artificially – provides beta-carotene to a large population of people around the world who otherwise struggle to find enough in their diets.

However, the race by horticulturalists to the genetic table in the past few decades – what could accurately be described as the transgenic generation of research – has by no means been our first experiment with the genetic manipulation of food. In fact, if anything, it is a more deliberate, well-studied, and careful advance than those that came before it.
A VERY Brief History of Genetically Modified Food

Some proponents of GMO foods are quick to point out that humans have been modifying foods at the genetic level since the dawn of agriculture itself. We crossbreed plants with each other to produce hybrids (can I interest you in a boysenberry?). And of course, we select our crops for breeding from those with the most desirable traits, effectively encouraging genetic mutations that would have otherwise resulted in natural failure, if not helped along by human hands. Corn as we know it, for example, would never have survived in nature without our help in breeding it.

Using that as a justification for genetic meddling, however, is like saying we know that NASCAR drivers don’t need seatbelts because kids have been building soapbox racers without them for years. Nature, had the mix not been near ideal to begin with, would have prevented such crossbreeding. Despite Hollywood’s desires, one can’t simply crossbreed a human and a fly, or even a bee and a mosquito, for that matter – their genetics are too different to naturally mix. And even if it did somehow occur, if it did not make for a hardier result, then natural selection would have quickly kicked in.

No, I am talking about real, scientific genetic mucking – the kind we imagined would result in the destruction of the world from giant killer tomatoes or man-eating cockroaches in our B-grade science-fiction films. Radiation mutants.

Enterprising agrarians have been blasting plants with radiation of all sorts ever since we started messing around with atomic science at the dawn of the 20th century. In the 1920s, just when Einstein and Fermi were getting in their grooves, Dr. Lewis Stadler at the University of Missouri was busy blasting barley seeds with X-rays – research that would usher in a frenzy of mutation breeding to follow. With the advent of nuclear technology from the war effort, X-rays expanded into atomic radiation, with the use of gamma rays leading the pack.
The United States even actively encouraged the practice for decades, through a program dubbed “Atoms for Peace” that proliferated nuclear technology throughout various parts of the private sector in the hope that it would improve the lives of many. And it did. Today, thousands of agricultural varieties we take for granted – including, according to a 2007 New York Times feature on the practice, “rice, wheat, barley, pears, peas, cotton, peppermint, sunflowers, peanuts, grapefruit, sesame, bananas, cassava and sorghum” – are a direct result of mutation breeding. They would not be classified as GM foods, in the sense that we did not use modern transgenic techniques to make them, but they are genetically altered nonetheless, to the same or a greater degree than most modern GMO strains.

Unlike modern GM foods – which are often closely protected by patents and armies of lawyers to ensure the inventing companies reap maximum profits from their use – the overwhelming majority of the original generations of radiation-mutated plant varieties came out of academic and government-sponsored research, and thus were provided free and clear for farmers to use without restriction.

With the chemical revolution of the mid-20th century, radiation-based mutations were followed by the use of chemical agents like the methyl sulfate family of mutagens. And after that, the crudest forms of organic genetic manipulation came into use, such as the use of transposons – mobile sequences of DNA, discovered in 1948, that can be used like biological duct tape to cover whole sections of the genome.

These modified crops stood up better to pests, lessened famines, reduced reliance on pesticides, and most of all enabled farmers to increase their effective yields. Coupled with the development of commercial machinery like tractors and harvesters, the rise of mutagenic breeding resulted in an agricultural revolution of a magnitude few truly appreciate.
In the late 1800s, the overwhelming majority of global populations lived in rural areas, and most people spent their lives in agrarian pursuits. From subsistence farmers to small commercial operations, the majority of the population of every country, the US included, was employed in agriculture. Today, less than 2% of the American population (legal and illegal combined) works in farming of any kind. Yet we have more than enough food to feed all of our people, and a surplus to export to more densely populated nations like China and India.

The result is that a sizable percentage of the world’s plant crops today – the ones on top of which much of the modern-era GMO experiments are done – are already genetic mutants. Hence the slippery slope that serves as the foundation of the resistance from regulators over the labeling of GM food products. Where do you draw the line on what to label? And frankly, how do you even know for sure, following the Wild-West days of blasting everything that could grow with some form or another of radiation, which plants are truly virgin DNA?

The world’s public is largely unaware that many of the foods they eat today – far more than those targeted by anti-GMO protestors and labeling advocates – are genetically modified. Yet we don’t seem to be dying off in large numbers, as the anti-RNAi researchers project will happen. In fact, global lifespans have increased dramatically across the board in the last century.

The Rise of Careful

The science of GM food has advanced considerably since the dark ages of the 1920s. Previous versions of mutation breeding were akin to trying to fix a pair of eyeglasses with a sledgehammer – messy and imprecise, with rare positive results. And the outputs of those experiments were often foisted upon a public without any knowledge or understanding of what they were consuming. Modern-day GM foods are produced with a much more precise toolset, which means less unintended collateral damage.
Of course, it also opens up a veritable Pandora’s box of new possibilities (glow-in-the-dark corn, anyone?) and with it a whole host of potential new risks. Like any sufficiently powerful technology – such as the radiation and harsh chemicals used in prior generations of mutation breeding – without careful control over its use, the results can be devastating. That risk, however, is outweighed by the massive improvements over the prior, messier generation of techniques.

And thus, regulatory regimes from the FDA to CSIRO to the European Food Safety Authority (EFSA) are taking increasing steps to ensure that GM foods are thoroughly tested long before they come to market. In many ways, the tests are far more rigorous than those that prescription drugs undergo, as the target population is not sick and in need of urgent care – a situation in which side effects can be tolerated. This is why a great many of the proposed GM foods of the last 20 years – including the controversial “suicide seeds” meant to protect the intellectual property of large GM seed producers like Monsanto (which bought out Calgene, the inventor of that Flavr Savr tomato, and is now the 800-lb. gorilla of the GM food business) – were never allowed onto the market.

Still, with the 15 years from 1996 to 2011 seeing a 96-fold increase in the amount of land dedicated to growing GM crops, and the incalculable success of the generations of pre-transgenic mutants before them, scientists and corporations are in a mad sprint to find the next billion-dollar GM blockbuster. In doing so they are seeking tools that make the discovery of such breakthroughs faster and more reliable.

With RNAi, they may just have found one such tool. If it holds true to its laboratory promises, its benefits will be obvious from all sides. Unlike previous generations of GMO, RNAi-treated crops do not need to be permanently modified.
This means that mutations which outlive their usefulness, like resistance to a plague that has since been eradicated, do not need to live on forever. This allows companies to be more responsive, and potentially provides a big relief to consumers concerned about the implications of eating foods with permanent genetic modifications.

The simple science of creating RNAi molecules is also attractive to the people who develop these new agricultural products: once a messenger RNA is identified, there is a precise formula to tell you exactly how to shut it off, potentially saving millions or even billions of dollars that would otherwise be spent in the research lab trying to figure out exactly how to affect a particular genetic process.

And with the temporary nature of the technique, both the farmers and the Monsantos of the world can breathe easier over the huge intellectual-property questions of how to deal with genetically altered seeds – not to mention the questions of natural spread of strains onto farms that might not want GMO crops in their midst. Instead of needing to engineer in complex genetic functions to ensure progeny don’t pass down enhancements for free and that black markets in GMO seeds don’t flourish, the economic equation becomes as simple as fertilizer: use it or don’t.

While RNAi is not a panacea for GMO scientists – it serves as an off switch, but cannot add new traits nor even turn on dormant ones – the dawn of antisense techniques is likely to mean an even further acceleration of the science of genetic meddling in agriculture. Its tools are more precise even than many of the most recent permanent genetic-modification methods. And the temporary nature of the technique – the ability to apply it selectively as needed versus breeding it directly into plants which may not benefit from the change decades on – is sure to please farmers, and maybe even consumers as well.
That is, unless the scientists in Australia are proven correct, and the siRNAs used in experiments today make their way into humans and affect the same genetic functions in us as they do in the plants. The science behind their assertions still needs a great deal of testing. Much of their assertion defies the basic understanding of how siRNA molecules are delivered – an incredibly difficult and delicate process that has been the subject of hundreds of millions of dollars of research thus far, and that still remains, thanks to our incredible immune systems, a daunting challenge for one of the most promising forms of medicine (and now of farming too).

Still, their perspective is important food for thought… and likely fuel for much more debate to come. After all, even if you must label your products as containing GMO-derived ingredients, does that apply if you just treated an otherwise normal plant with a temporary, consumable, genetic on or off switch? In theory, the plant which ends up on your plate is once again genetically no different from the one which would have been on your plate had no siRNAs been used during its formative stages.

One thing is sure: the GMO food train left the station nearly a century ago and is now a very big business that will continue to grow and to innovate, using RNAi and other techniques to come.

The Casey Extraordinary Technology team has been tracking the leading lights of the RNAi medical industry for some time. Recently, one of our small biotech upstarts struck a potentially massive, exclusive deal with an agricultural giant to seed its own RNAi research program. Success could mean billions for both firms. If you’d like to know what company we believe will profit most from the next generation of GM food development, subscribe to CET.

Bits & Bytes

Last Chance for RIM?
(CNN Money) Few companies have been written off as frequently as Research in Motion, whose Blackberry was once state of the art and which now finds itself fighting for its life. Its stock just soared 9% merely because it said release of the new Blackberry 10 is still on schedule for early next year. Whether the 10 will be able to put a dent into the Apple/Android monolith remains to be seen, but for RIM it could be the last, best hope.

Giant Media Merger

(LA Times) What do you get when you mate Han Solo with Minnie Mouse? We’re about to find out – fiscally, if not physically – with Tuesday’s announcement that Disney is acquiring Lucasfilm for a cool $4 billion. Disney is projecting it’ll get its money back within three years, while George is, well, retiring – as he is now well able to do.

Google Settles Final AdWords Dispute

(Ars Technica) Several companies have taken Google to court over AdWords, saying Google shouldn’t be allowed to key advertisements to their names, which are protected trademarks. The last and one of the most persistent has been Rosetta Stone, a language-software maker that sued Google in 2009, but lost in federal court. However, its case was revived on appeal, and yesterday it finally was settled on confidential terms.

How Easy Is a Tablet to Use?

(TechCrunch) Pretty damn easy, as it turns out. In a remarkable experiment, OLPC (One Laptop per Child) researchers in Ethiopia handed a Motorola Xoom tablet to each of a group of illiterate village children aged four to eight. Click the link to learn the amazing results.
U.S. Surgeon General Jerome Adams made a plea in April for more Americans to be prepared to administer naloxone, an opioid antidote, in case they or people close to them suffer an overdose.

“The call to action is to recognize if you’re at risk,” Adams told NPR’s Rachel Martin. “And if you or a loved one are at risk, keep within reach, know how to use naloxone.”

Nearly every state has made it easier for people to get naloxone by allowing pharmacists to dispense the drug without an individual prescription. Public health officials are able to write what are called standing orders, essentially prescriptions that cover everyone in their jurisdiction.

Some states require training in how to use naloxone – typically given as a nasal spray called Narcan or with an EpiPen-like automatic injector – before someone can pick it up. But the medicine is simple to use either way.

After the surgeon general called for more people to be prepared with naloxone, we decided to ask Americans about their knowledge of the opioid antidote’s availability, attitudes toward using it, and experience with the medicine in the latest NPR-IBM Watson Health Poll. The survey queried more than 3,000 households nationwide in May.

We wondered how many people know about naloxone and the fact that someone doesn’t have to be a medical professional to administer it. Fifty-nine percent of respondents said they were aware of the antidote and that it could be given by laypeople; 41 percent said they weren’t.

We then asked people who knew about naloxone if they would need a prescription to get it. The answers were pretty evenly divided among three options: yes, no, and not sure/no response.

“Why, with all the attention we’ve had in the media, why don’t more Americans know about naloxone?” asks Dr. Anil Jain, vice president and chief health information officer for IBM Watson Health.
“When people did know, why did people think they needed a prescription?” While the survey doesn’t get at the causes, Jain says, the findings underscore the need for greater public awareness.

Baltimore Health Commissioner Dr. Leana Wen says the lack of knowledge among Americans at large isn’t all that surprising. “Policy alone is necessary but not sufficient,” she says. “People still don’t know to go to the pharmacy to get access to naloxone, especially individuals at the highest risk.”

To change that, she says, “you have to have continued education and the delivery of services” where people need them.

In Baltimore, the health department maps where overdoses are happening and sends outreach workers to the areas. But money is an issue, even at a negotiated cost of $75 per naloxone kit, Wen says. There isn’t enough naloxone to go around. “Every week we take stock of how many naloxone kits we have for the rest of fiscal year,” she says. “Who’s at most risk? Those are who we give the naloxone to.”

The NPR-IBM Watson Health Poll asked people if they would be willing to use Narcan, the nasal spray form of naloxone, to help a person who had overdosed. Fifty-eight percent said they would and 29 percent said no. Thirteen percent weren’t sure or didn’t respond. Only 47 percent of people 65 and older said they would be willing to do it.

When asked about the auto-injector option, 68 percent of respondents said they would be willing to administer naloxone that way and 22 percent said they wouldn’t be.

Finally, we asked whether people had obtained naloxone, and 10 percent said they or someone in their household had. Among those people, 81 percent said the naloxone had been used, but the sample size for this question was small, making interpretation difficult.

The nationwide poll has an overall margin of error of plus or minus 1.8 percentage points. You can find the questions and full results here. Copyright 2018 NPR. To see more, visit http://www.npr.org/.
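As a sanity check, the stated ±1.8-point margin of error is consistent with the standard 95%-confidence formula for a simple random sample of about 3,000. The article says only "more than 3,000 households," so n = 3,000 here is an assumed round number for illustration:

```python
# Margin of error for a survey proportion at 95% confidence (z = 1.96),
# using the worst case p = 0.5, which maximizes the margin.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Return the margin of error in percentage points for sample size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(3000), 1))  # 1.8 - matches the poll's stated figure
```

Note the square-root relationship: quadrupling the sample size only halves the margin, which is why the small subsample on naloxone use (the 81 percent figure) is flagged as hard to interpret.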
A disabled campaigner who is battling to protect the rights of wheelchair-users to travel on buses has won permission for his appeal to be heard by the Supreme Court.

Doug Paulley (pictured) has been told by the court that it will hear his discrimination case – which is backed by the Equality and Human Rights Commission (EHRC) – against transport company First Bus.

Paulley, from Wetherby, took the case against First Bus following an incident in February 2012. He had been planning to travel to Leeds, but was prevented from entering the bus because the driver refused to insist that a mother with a pushchair should move from the wheelchair space.

He told Disability News Service that he was “relieved” and “really glad” about the Supreme Court’s decision, although it is unlikely to be heard until the latter part of 2016 at the earliest.

He said: “It would have been a travesty if they had not [agreed to hear the appeal], given the huge support from lots of disabled people and that [the appeal] is being bank-rolled by the EHRC.”

Paulley said the case wouldn’t have got so far “without so many disabled people sticking their necks out and campaigning around it and making it such a public issue”.

And he said the case going to the Supreme Court would “certainly make a lot of people think and talk about it”.

He said he now rarely used buses because of the effect the “extra layer of stress” caused by the incident – and the uncertainty he now feels when he uses a bus – had had on his mental health problems.

Disabled campaigners were left “appalled” in December when three court of appeal judges found in favour of First Bus, and against Paulley.

That judgement over-turned a county court ruling that wheelchair-users should have priority in the use of dedicated wheelchair spaces over parents with pushchairs, and that the “first come, first served” policy of First Bus breached the Equality Act. Instead, the court of appeal said that a bus driver needs only to request – and not demand – that a
buggy-user vacates the space if it is needed by a wheelchair-user.

A First Bus spokesman said today (Thursday): “The court of appeal decision in 2014 gave our customers, drivers and the wider industry much-needed clarification around the priority use of the wheelchair space on board buses.

“The court’s judgment endorsed our current policy, which is to ask other passengers in the strongest polite terms to make way for wheelchair-users.

“We note Mr Paulley has been given permission to appeal the court of appeal decision. We will continue to make the case that our current policy both complies with the law and remains the most practical solution for all concerned.”

The importance of Paulley’s case was highlighted this week when it was mentioned several times in the first evidence session of a committee set up by the House of Lords to examine the impact of the Equality Act 2010 on disabled people. The Conservative peer Lord Northbrook was one of those who mentioned the Paulley case, when he questioned whether the law on service-providers’ duties to make reasonable adjustments for disabled people was “sufficiently precise”.

Meanwhile, on the same day that the Supreme Court announced its decision, campaigners revealed that another transport company, National Express, had scrapped its own “first come, first served” policy on its buses, and replaced it with a wheelchair priority policy. The move followed a question asked by disabled athlete Susan Cook at the company’s annual general meeting (AGM) on 6 May, after which she had secured a meeting with two of the company’s top executives.

Cook was supported by the user-led, campaigning charity Transport for All, and ShareAction, which helps people attend company meetings and raise issues with directors.

Cook said: “It was great to use my power as a shareholder to secure a meeting with the company and persuade them to change their policy.

“I’m glad National Express saw sense on this issue and I’m looking forward to going to more AGMs to
raise disability rights issues in future.”

Lianna Etkind, campaigns and outreach coordinator at Transport for All, said: “Being able to use public transport is an essential part of a full and active life, getting to work, having access to healthcare and education, or a social life, with freedom and independence. We’re pleased National Express listened to reason and have decided to change their policy on wheelchair priority. It was great to work with Susan and ShareAction to bring about this change.”

National Express has already announced plans to introduce a “turn up and go” service on its c2c train services in Essex from September, so disabled people needing assistance will be able to arrive at stations and have staff help them, without needing to book in advance. National Express will be the first private train company to offer this service in the UK and has also pledged to be the first train operator to make a route completely accessible.
Mike Hogan | November 1, 2007

Interesting new PC shapes and concepts promise to accelerate our drive toward virtual computing. They’re desktops, portables, even memory sticks with names like FlipStart, iMac, MojoPac and Zonbu. Invariably thin and light, they’re not meant to operate as lone computing devices. Rather, they rely on the web for much of their functionality.

Meet your new PC – the endpoint. The net has finally become the PC, an idea some superrich somebodies had a decade ago and lost a bunch of money on. So? PC users didn’t live with one foot in the virtual world then, and web infrastructure and computer subsystems weren’t anything like they are today. Component miniaturization, free open source software and logarithmic growth in web services combine in new PCs like the paperback-size Zonbu.

Relying on the web for its hard drive, the 5-pound brick fits in the palm of your hand. But mobility isn’t its primary goal; Zonbu’s creators wanted to build a PC that was both cheap and green. Add your own keyboard, display and mouse to the CPU-only Zonbu, which costs as little as $99 with a two-year online storage contract (as low as $13 a month for 25GB). Zonbu’s Linux OS is housed on a 4GB CompactFlash card, along with 20 open source applications like the OpenOffice.org productivity suite and a Firefox browser.

Instead of the standard 200-watt gulps, Zonbu takes 9-watt sips of electricity and is religiously carbon-emission neutral. Ohhhhm. But even we major carbon consumers can appreciate Zonbu’s complete silence (no hard drive or fan) and reduced levels of Windows’ hassle emissions. There’s no system configuration, license management, drive defragmentation or constant updating of multiple layers of malware protection – no Windows Mega-Patch Tuesdays!

Only Skin-Deep

But Zonbu is a squat little box. If it’s style you’re looking for – and you have $1,200 to spend – where else to turn but Apple Inc.?
Its newest line of iMacs are the sleekest desktops ever and will run Windows software. A CPU, hard drive and more are somehow poured into a 20- or 24-inch display balanced on a wire-thin L stand. Add Apple’s new wireless keyboard and mouse, and you have a computer that barely casts a shadow on your desk.

Want web access to go? New featherweights like the OQO model 02 and FlipStart can keep you connected wherever you roam at speeds of up to 1.4Mbps and 3.1Mbps, respectively. Weighing in at a pound and some change, each squeezes Bluetooth, Wi-Fi and even wide-area EV-DO into handhelds that measure less than 6 by 5 inches. They slide easily into a pocket or purse but include displays large enough for a full-size web browser. The heaviest part of either is the price tag: $1,299 and up.

Still too much PC to carry? How about a USB 2.0 memory stick packing the new MojoPac virtual PC environment and a copy of your entire Windows desktop? Plug the MojoPac stick into any PC and compute from MojoPac’s secure environment without changing a single setting on the host. It’s very similar to the U3 environment with one critical difference: MojoPac works with the Microsoft Office Suite.

Still too heavy for you? If you can lift a user ID and password, you can keep all your files on a virtual application site like Zoho. Log into its Microsoft Office-compatible suite from any broadband PC you can find. Traditional PCs aren’t going away – they’re just becoming terminals into that ultimate virtual PC in the sky: the web.

Mike Hogan is Entrepreneur’s technology editor. This story, “The Web is the New PC,” appears in the November 2007 issue of Entrepreneur.
MomentFeed Reports “Customer Brand Loyalty is on Life Support”

Business Wire | July 18, 2019

Only 18 Percent of Consumers Search on Mobile for Brand Names

MomentFeed, a leader in software that connects global and national brands with consumers who use mobile when they are ready to buy locally, published a report that reinforces the value of local-mobile in the brand marketing mix. Only 18 percent of consumers search on mobile for brand names. Proximity, convenience, relevance, and quality are more important than ever.

Although major brands are spending upwards of $70B in TV advertising and digital marketing spend is at 44 percent of budgets and on the upswing, location marketing strategy is often forgotten, especially by the largest of consumer brands. This can be deadly for companies that rely heavily on local stores and venues for the vast majority of sales. A new report called “The Rapid Death of Mobile Brand Search” highlights the facts and trends behind local-mobile search.

For example, 82 percent of consumers search their phones for a product type (e.g., “nitro brew near me” rather than a specific branded coffee shop) and will often make choices based on Google ranking and proximity first.

“I’ve met with many senior marketing executives who assume that because they’ve invested in search marketing and digital that they’ll automatically show up in mobile searches,” says Jim D’Arcangelo, MomentFeed CMO.
“But to show up consistently in the ‘Google 3-Pack’ (top three listings), brands need to be hyper-focused on and savvy about their location marketing and review management and their impact on SEO. We’ve seen huge sales boosts among retailers and restaurants who focus on location marketing in addition to brand-building.”

Adds Greg Sterling, VP of Strategy, Local Search Association: “Brands that aren’t devoting significant attention and resources to local and mobile marketing are not only falling behind competitors, they’re potentially losing revenue.”
Reviewed by James Ives, M.Psych. (Editor) | Jun 24, 2019

Micronutrient deficiencies, including vitamins B12 and D, as well as folate, iron, zinc and copper, are common in adults at the time of diagnosis with celiac disease. These deficiencies should be addressed at that time, according to a study by Mayo Clinic researchers.

The retrospective study of 309 adults newly diagnosed with celiac disease at Mayo Clinic from 2000 to 2014 also found that low body weight and weight loss, which are commonly associated with celiac disease, were less common. Weight loss was seen in only 25.2% of patients, and the average body mass index was categorized as overweight. The study will appear in the July issue of Mayo Clinic Proceedings.

Celiac disease is an immune reaction to consuming gluten, a protein found in wheat, barley and rye. Eating gluten triggers an immune response in the small intestine that over time damages the intestine’s lining and prevents it from absorbing some nutrients, leading to diarrhea, fatigue, anemia, weight loss and other complications. Based on recent data, the prevalence of celiac disease in the U.S. is 1 in 141 people, and its prevalence has increased over the past 50 years.

“Our study suggests that the presentation of celiac disease has changed from the classic weight loss, anemia and diarrhea, with increasing numbers of patients diagnosed with nonclassical symptoms,” says Dr. Bledsoe, the study’s primary author. “Micronutrient deficiencies remain common in adults, however, and should be assessed.” Assessment should include vitamin D, iron, folic acid, vitamin B12, zinc and copper.

Zinc deficiency was observed most frequently at diagnosis, the study says, with 59.4% of patients having a deficiency.
Other deficiencies included iron, vitamin D, copper, vitamin B12 and folate. The nutritional deficiencies have potential health ramifications, though in this retrospective study the clinical implications remain unknown. “Further studies are needed to better define the implications of the deficiencies, optimal replacement strategies and follow-up,” says Dr. Bledsoe.

“It was somewhat surprising to see the frequency of micronutrient deficiencies in this group of newly diagnosed patients, given that they were presenting fewer symptoms of malabsorption,” says Adam Bledsoe, M.D., a gastroenterology fellow at Mayo Clinic’s Rochester campus.

Source: Mayo Clinic
Reviewed by Alina Shrourou, B.Sc. (Editor) | Jul 9, 2019

Researchers have developed a new technique to help doctors more quickly and accurately detect autism spectrum disorder (ASD) in children. In a study led by the University of Waterloo, researchers characterized how children with ASD scan a person’s face differently than a neuro-typical child. Based on the findings, the researchers were able to develop a technique that considers how the gaze of a child with ASD transitions from one part of a person’s face to another.

“Many people are suffering from autism, and we need early diagnosis especially in children. The current approaches to determining if someone has autism are not really child-friendly. Our method allows for the diagnosis to be made more easily and with less possibility of mistakes. The new technique can be used in all ASD diagnosis, but we believe it’s particularly effective for children,” says Mehrshad Sadria, a master’s student in Waterloo’s Department of Applied Mathematics.

According to the developers, the use of this technology makes the diagnostic process less stressful for the children and, if combined with existing manual methods, could help doctors better avoid a false positive autism diagnosis.

In developing the new technique, the researchers evaluated 17 children with ASD and 23 neuro-typical children. The mean chronological ages of the ASD and neuro-typical groups were 5.5 and 4.8, respectively. Each participant was shown 44 photographs of faces on a 19-inch screen, integrated into an eye-tracking system. The infrared device interpreted and identified the locations on the stimuli at which each child was looking via emission and reflection of waves from the iris. The images were separated into seven key areas of interest (AOIs) in which participants focussed their gaze: under the right eye, right eye, under the left eye, left eye, nose, mouth and other parts of the screen.
The researchers wanted to know not only how much time the participants spent looking at each AOI, but also how they moved their eyes and scanned the faces. To get that information, the researchers used four different concepts from network analysis to evaluate the varying degree of importance the children placed on the seven AOIs when exploring the facial features.

The first concept determined the number of other AOIs that the participant directly moves their eyes to and from a particular AOI. The second concept looked at how often a particular AOI is involved when the participant moves their eyes between two other AOIs as quickly as possible. The third concept is related to how quickly one can move their eyes from a particular AOI to other AOIs. The fourth concept measured the importance of an AOI, in the context of eye movement and face scanning, by the number of important AOIs that it shares direct transitions with.

Currently, the two most favoured ways of assessing ASD involve a questionnaire or an evaluation from a psychologist. “It is much easier for children to just look at something, like the animated face of a dog, than to fill out a questionnaire or be evaluated by a psychologist,” said Anita Layton, who supervises Sadria and is a professor of Applied Mathematics, Pharmacy and Biology at Waterloo. “Also, the challenge many psychologists face is that sometimes behaviours deteriorate over time, so the child might not display signs of autism, but then a few years later, something starts showing up. Our technique is not just about behavior or whether a child is focussing on the mouth or eyes. It’s about how a child looks at everything.”

Source: University of Waterloo
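The four network concepts described in the study are not named in the article, but they read like the standard graph-centrality measures: degree, betweenness, closeness, and a neighbour-importance measure such as eigenvector centrality. Here is a minimal sketch under that assumed mapping, using a made-up gaze sequence in place of real eye-tracking data (the AOI names and the sequence are purely illustrative):

```python
from collections import defaultdict

# Hypothetical gaze sequence over the study's AOIs (illustrative data only;
# a real recording would come from the infrared eye tracker).
gaze = ["left_eye", "right_eye", "nose", "mouth", "left_eye", "nose",
        "under_left_eye", "left_eye", "mouth", "nose", "right_eye"]

# Build an undirected transition graph: an edge means the child moved
# directly between the two AOIs at least once.
edges = defaultdict(set)
for a, b in zip(gaze, gaze[1:]):
    if a != b:
        edges[a].add(b)
        edges[b].add(a)

nodes = sorted(edges)

def bfs_distances(src):
    """Shortest-path lengths (counted in transitions) from src to each AOI."""
    dist = {src: 0}
    frontier = [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in edges[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

# Concept 1 (degree): number of AOIs reached by a direct transition.
degree = {n: len(edges[n]) for n in nodes}

# Concept 3 (closeness): how quickly the gaze can reach all other AOIs.
closeness = {
    n: (len(nodes) - 1) / sum(d for m, d in bfs_distances(n).items() if m != n)
    for n in nodes
}

# Concepts 2 and 4 would correspond to betweenness and eigenvector
# centrality, computed on this same transition graph.
```

In this toy sequence the left eye and nose act as hubs (direct transitions to every other AOI), which is exactly the kind of asymmetry between AOIs that the researchers quantify.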
Facebook’s Zuckerberg admits mistakes—but no apology (Update)

At another point, the Facebook chief seemed to favor regulation for Facebook and other internet giants. At least, that is, the “right” kind of rules, such as ones requiring online political ads to disclose who paid for them. In almost the next breath, however, Zuckerberg steered clear of endorsing a bill that would write such rules into federal law, and instead talked up Facebook’s own voluntary efforts on that front.

“They’ll fight tooth and nail to fight being regulated,” said Timothy Carone, a Notre Dame business professor. “In six months we’ll be having the same conversations, and it’s just going to get worse going into the election.”

Even Facebook’s plan to let users know about data leaks may put the onus on users to educate themselves. Zuckerberg said Facebook will “build a tool” that lets users see if their information had been impacted by the Cambridge leak, suggesting that the company won’t be notifying people automatically. Facebook took this kind of do-it-yourself approach in the case of Russian election meddling, in contrast to Twitter, which notified users who had been exposed to Russian propaganda on its network.

In what has become one of the worst backlashes Facebook has ever seen, politicians in the U.S. and Britain have called for Zuckerberg to explain its data practices in detail. State attorneys general in Massachusetts, New York and New Jersey have opened investigations into the Cambridge mess. And some have rallied to a movement that urges people to delete their Facebook accounts entirely.

Sandy Parakilas, who worked in data protection for Facebook in 2011 and 2012, told a U.K.
parliamentary committee Wednesday that the company was vigilant about its network security but lax when it came to protecting users’ data. He said personal data including email addresses and in some cases private messages was allowed to leave Facebook servers with no real controls on how the data was used after that.

Paul Argenti, a business professor at Dartmouth, said that while Zuckerberg’s comments hit the right notes, they still probably aren’t enough. “The question is, can you really trust Facebook,” he said. “I don’t think that question has been answered.”

Cambridge Analytica headquarters in central London was briefly evacuated Thursday as a precaution after a suspicious package was received. Nothing dangerous was found and normal business resumed, police said. But it’s far from clear whether he’s won over U.S. and European authorities, much less the broader public whose status updates provide Facebook with an endless stream of data it uses to sell targeted ads.

On Wednesday, the generally reclusive Zuckerberg sat for an interview on CNN and several more to other outlets, addressing reports that Cambridge Analytica purloined the data of more than 50 million Facebook users in order to sway elections. The Trump campaign paid the firm $6 million during the 2016 election, although it has since distanced itself from Cambridge. Zuckerberg apologized for a “major breach of trust,” admitted mistakes and outlined steps to protect users following Cambridge’s data grab.

“I am really sorry that happened,” Zuckerberg said on CNN. Facebook has a “responsibility” to protect its users’ data, he added, noting that if it fails, “we don’t deserve to have the opportunity to serve people.” His mea culpa on cable television came a few hours after he acknowledged his company’s mistakes in a Facebook post, but without saying he was sorry. Zuckerberg and Facebook’s No.
2 executive, Sheryl Sandberg, had been quiet since news broke Friday that Cambridge may have used data improperly obtained from roughly 50 million Facebook users to try to sway elections. Cambridge’s clients included Donald Trump’s general-election campaign.

(Photo: The offices of Cambridge Analytica (CA) in central London, after it was announced that Britain’s information commissioner Elizabeth Denham is pursuing a warrant to search Cambridge Analytica’s computer servers, Tuesday March 20, 2018. Denham said Tuesday that she is using all her legal powers to investigate Facebook and political campaign consultants Cambridge Analytica over the alleged misuse of millions of people’s data. Cambridge Analytica said it is committed to helping the U.K. investigation. Kirsty O’Connor/PA via AP)

(Photo: Chief Executive of Cambridge Analytica (CA) Alexander Nix leaves the offices in central London, Tuesday March 20, 2018. Cambridge Analytica has been accused of improperly using information from more than 50 million Facebook accounts. It denies wrongdoing. Dominic Lipinski/PA via AP)

(Photo: In this June 21, 2017, file photo, Facebook CEO Mark Zuckerberg speaks during preparation for the Facebook Communities Summit, in Chicago. Zuckerberg embarked on a rare media mini-blitz Wednesday, March 22, 2018, in the wake of a privacy scandal involving a Trump-connected data-mining firm. AP Photo/Nam Y. Huh, File)

Citation: Can Zuckerberg’s media blitz take the pressure off Facebook?
(2018, March 22) retrieved 18 July 2019 from https://phys.org/news/2018-03-zuckerberg-media-blitz-pressure-facebook.html

That audit will be a giant undertaking, said David Carroll, a media researcher at the Parsons School of Design in New York – one that he said will likely turn up a vast number of apps that did “troubling, distressing things.” But on other fronts, Zuckerberg carefully hedged otherwise striking remarks. In the CNN interview, for instance, he said he would be “happy” to testify before Congress – but only if it was “the right thing to do.” Zuckerberg went on to note that many other Facebook officials might be more appropriate witnesses depending on what Congress wanted to know.

In the wake of a privacy scandal involving a Trump-connected data-mining firm, Facebook CEO Mark Zuckerberg embarked on a rare media mini-blitz in an attempt to take some of the public and political pressure off the social network. © 2018 The Associated Press. All rights reserved.

Facebook shares have dropped some 8 percent, lopping about $46 billion off the company’s market value, since the revelations were first published. While several experts said Zuckerberg took an important step with the CNN interview, few were convinced that he put the Cambridge issue behind him. Zuckerberg’s apology, for instance, seemed rushed and pro forma to Helio Fred Garcia, a crisis-management professor at NYU and Columbia University. “He didn’t acknowledge the harm or potential harm to the affected users,” Garcia said. “I doubt most people realized he was apologizing.” Instead, the Facebook chief pointed to steps the company has already taken, such as a 2014 move to restrict the access outside apps had to user data. (That move came too late to stop Cambridge.)
And he laid out a series of technical changes that will further limit the data such apps can collect, pledged to notify users when outsiders misuse their information and said Facebook will “audit” apps that exhibit troubling behavior. This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
Nintendo on Thursday said its annual net profit soared 36.1 percent, thanks to the immense popularity of its Switch console, and announced it was appointing a new president.

Citation: Nintendo annual profits soar 36 percent to $1.27bn on Switch sales (2018, April 26) retrieved 18 July 2019 from https://phys.org/news/2018-04-nintendo-annual-profits-soar-percent.html © 2018 AFP

Shuntaro Furukawa, 46, who currently oversees marketing and other divisions at the Kyoto-based video game giant, will succeed 68-year-old Tatsumi Kimishima, who has headed up the firm since 2015. Nintendo has been on a winning streak, with its Switch console flying off the shelves since its launch last year.

The company said its net profit for the year to March reached 139.6 billion yen ($1.27 billion), beating its own expectations despite repeatedly raised annual targets. Its operating profit saw a six-fold increase to 177.6 billion yen, and its sales more than doubled from the previous year, to 1.056 trillion yen. Nintendo projected further improvements during the ongoing year to March 2019, forecasting annual net profit would improve 18.2 percent to 165 billion yen and operating profit would reach 225 billion yen, a 26.7 percent rise.
Annual sales are expected to reach 1.2 trillion yen, up 13.7 percent.

“The results for this fiscal year show a very positive trend in global hardware sales for Nintendo Switch, which sold a total of 15.05 million units during this fiscal year,” the company said in a statement. “On the software end, Super Mario Odyssey has been a major hit with audiences worldwide, and sold 10.41 million units,” it said, adding that Switch software sales reached 63.51 million units this fiscal year. Nintendo 3DS hardware sales remained solid even after the launch of Nintendo Switch, with sales during this fiscal year reaching 6.40 million units, the company said.
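The reported figures are internally consistent, which is easy to verify (a quick sanity check on the article's numbers, all in billions of yen):

```python
# Figures as reported in the article, in billions of yen.
net_profit_fy2018 = 139.6      # net profit, year to March 2018
forecast_fy2019 = 165.0        # forecast net profit, year to March 2019

# The forecast implies roughly the 18.2 percent improvement the article cites:
implied_growth_pct = (forecast_fy2019 / net_profit_fy2018 - 1) * 100

# Likewise, the 36.1 percent rise implies a prior-year net profit of
# about 139.6 / 1.361, i.e. a little over 100 billion yen.
prior_year_profit = net_profit_fy2018 / 1.361
```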
A Harvard University forum is examining how a recent death linked to self-driving technology is causing concern about safety.

Citation: Harvard forum examining safety of self-driving vehicles (2018, May 4) retrieved 18 July 2019 from https://phys.org/news/2018-05-harvard-forum-safety-self-driving-vehicles.html © 2018 The Associated Press. All rights reserved.

(Photo: In this Tuesday, Jan. 10, 2017, file photo, an autonomous vehicle is driven by an engineer on a street in an industrial park in Boston. Harvard University’s School of Public Health is holding a forum on Friday, May 4, 2018, to examine how recent deaths linked to self-driving technology are causing concern about safety, and raising questions about whether the field is moving too quickly. AP Photo/Steven Senne, File)

Friday’s panel discussion at Harvard’s T.H. Chan School of Public Health is exploring whether the field is advancing too quickly. Some experts are pointing to the March death of a pedestrian struck by a self-driving Uber vehicle in Tempe, Arizona, as cause for serious safety concern. It was the first death involving a fully autonomous test vehicle. Current federal regulations have few requirements specifically for self-driving vehicles, leaving it for states to handle.

Participants will include Deborah Hersman, president and CEO of the National Safety Council, and John Leonard, vice president of research at the Toyota Research Institute. Toyota has been working with Uber on driverless systems.
A Utah driver who slammed her Tesla into a stopped firetruck at a red light earlier this year while using the vehicle’s semi-autonomous function has sued the company, saying salespeople told her the car would stop on its own in Autopilot mode if something was in its path.

Heather Lommatzsch claimed in the lawsuit filed Tuesday that Tesla salespeople told her in 2016 when she purchased the Model S that she could just touch the steering wheel occasionally while using the Autopilot mode. Lommatzsch, 29, said she tried to brake when she saw the stopped cars, but that the car’s brakes did not work. The accident happened May 11 in the Salt Lake City suburb of South Jordan. Lommatzsch broke her foot and was charged with a misdemeanor traffic citation for failure to keep a proper lookout. The firetruck’s driver suffered injuries but was not hospitalized.

Tesla spokesman Dave Arnold said in a statement about the lawsuit that the company “has always been clear that Autopilot doesn’t make the car impervious to all accidents.” “When using Autopilot, drivers are continuously reminded of their responsibility to keep their hands on the wheel and maintain control of the vehicle at all times,” Arnold said. Arnold stressed that Lommatzsch was cited and that the final police report said she told police she was looking at her phone before the crash. Car data showed Lommatzsch did not touch the steering wheel for 80 seconds before the crash, the report said.

Data taken from her car showed it picked up speed for 3.5 seconds before crashing into the firetruck, the report said. The driver then manually hit the brakes a fraction of a second before the impact. Police suggested that the car was following another vehicle and dropped its speed to 55 mph (89 kph) to match the leading vehicle. They say the leading vehicle then likely changed lanes and the Tesla automatically sped up to its preset speed of 60 mph (97 kph) without noticing the stopped cars ahead.
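Taking the police figures at face value, a back-of-envelope calculation gives a sense of the distances involved; the kinematics here are our own rough approximation, not something stated in the report:

```python
MPH_TO_MS = 1609.344 / 3600   # miles per hour -> metres per second

v_follow = 55 * MPH_TO_MS     # speed while matching the lead vehicle (m/s)
v_preset = 60 * MPH_TO_MS     # Autopilot's preset cruise speed (m/s)

# Distance covered during the 3.5-second acceleration toward the preset
# speed, approximating speed by the average of the two values: about 90 m.
accel_distance_m = (v_follow + v_preset) / 2 * 3.5

# Distance covered during the 80 hands-off seconds at roughly the preset
# speed: on the order of two kilometres of driving without wheel contact.
hands_off_km = v_preset * 80 / 1000
```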
(Photo: In this May 11, 2018, file photo released by the South Jordan Police Department, a traffic collision involving a Tesla Model S sedan with a fire department mechanic truck stopped at a red light in South Jordan, Utah. Heather Lommatzsch, the Utah driver who slammed her Tesla into the stopped firetruck at a red light while using the vehicle’s semi-autonomous function, is suing the company. South Jordan Police Department via AP, File)

Citation: Utah driver sues Tesla after crashing in Autopilot mode (2018, September 5) retrieved 17 July 2019 from https://phys.org/news/2018-09-utah-driver-sues-tesla-autopilot.html

All Teslas are equipped with automatic emergency braking, which Tesla says will detect objects and brake to help avoid or lessen impact of crashes. Tesla warns drivers to pay attention and not to rely on the system entirely. The National Transportation Safety Board recently issued initial findings about two separate crashes involving Tesla vehicles in which three people died. The agency found that a Tesla Model S electric car that crashed and burned last month in Florida, killing two teenagers, was traveling 116 mph (187 kph) three seconds before impact and had only slowed to 86 mph (138 kph) as the air bags were inflated. The agency said that a Tesla Model X SUV using Autopilot accelerated just before crashing into a California freeway barrier in March, killing its driver.

The National Highway Traffic Safety Administration is still investigating the Utah crash and cannot yet make public details, said spokeswoman Kathryn Henry. A study released in August by the Insurance Institute for Highway Safety found that cars and trucks with electronic driver-assist systems may not see stopped vehicles and could even steer a driver into a crash if
the driver is not paying attention. The paper, titled “Reality Check,” issued the warning after testing five of the systems from Tesla, Mercedes, BMW and Volvo on a track and public roads. The upshot is that while they could save your life, the systems can fail under many circumstances.

Lommatzsch claimed she has suffered serious physical injuries that have deprived her of being able to enjoy life and led to substantial medical bills. She is seeking at least $300,000 in damages. The Utah crash is one of several Tesla accidents that have brought scrutiny to its Autopilot, the company’s semi-autonomous system designed to keep a vehicle centered in its lane at a set distance from cars in front of it. The system can also guide the cars to change lanes automatically.

© 2018 The Associated Press. All rights reserved.