posted by martyb on Saturday February 13 2021, @05:54PM

Parsing C++ Is Literally Undecidable (2013):

Arthur T Knackerbracket has found the following story:

Many programmers are aware that C++ templates are Turing-complete, and this was proved in the 2003 paper C++ Templates are Turing Complete.

However, there is an even stronger result that many people are not aware of. The C++ FQA has a section showing that parsing C++ is undecidable, but many people have misinterpreted the full implications of this (understandable, since the FQA is discussing several issues over the course of its questions and does not make explicit the undecidability proof).

Some people misinterpret this statement to simply mean that fully compiling a C++ program is undecidable, or that showing the program valid is undecidable. This line of thinking presumes that constructing a parse tree is decidable, but only further stages of the compiler such as template instantiation are undecidable.

For example, see this (incorrect, but top-voted) Stack Overflow answer to the question What do people mean when they say C++ has “undecidable grammar”? This answer errs when it says: “Note this has nothing to do with the ambiguity of the C++ grammar.”

In fact, simply producing a parse tree for a C++ program is undecidable, because producing a parse tree can require arbitrary template instantiation. I will demonstrate this with a short program, which is a simplification/adaptation of what is in the FQA link above.
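
(The program itself is not reproduced in this excerpt. The sketch below is a hypothetical reconstruction of the kind of program described: TuringMachine, SomeType, ::name and x follow the description, while the wrapper template S and the dummy metaprogram body are placeholders standing in for an arbitrary template computation.)

    struct SomeType {};

    // Stand-in for an arbitrary template metaprogram. In the full argument,
    // computing `output` can require unbounded instantiation, i.e. it can
    // encode an arbitrary Turing machine.
    template <int N>
    struct TuringMachine {
        typedef SomeType output;  // imagine this depends on whether that machine halts
    };

    // Primary template: ::name is a type (an alias for int).
    template <typename T>
    struct S {
        typedef int name;
    };

    // Specialization chosen when the metaprogram yields SomeType:
    // ::name is an object of type int instead.
    template <>
    struct S<SomeType> {
        static int name;
    };
    int S<SomeType>::name = 3;

    int x = 2;

    int main() {
        // Multiplication of two ints, or declaration of a pointer-to-int
        // named x? A parser cannot decide without first instantiating
        // TuringMachine<17>.
        S<TuringMachine<17>::output>::name * x;
    }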

The parse tree for this program depends on whether TuringMachine::output is SomeType or not. If it is SomeType then ::name is an integer and the parse tree for the program is multiplying two integers and throwing away the result. If it is not SomeType, then ::name is a typedef for int and the parse tree is declaring a pointer-to-int named x. These two are completely different parse trees, and the difference between them cannot be delayed to further stages of the compiler.

The parse tree itself depends on arbitrary template instantiation, and therefore the parsing step is undecidable.

In practice, compilers limit template instantiation depth, so this is more of a theoretical problem than a practical one. But it is still a deep and significant result if you are ever planning on writing a C++ parser.


Original Submission

posted by martyb on Tuesday December 29 2020, @10:07PM

Entire Boston Dynamics robot line-up dances in the new year:

Boston Dynamics is sending off 2020 with its most impressive robot video to date – showing off its entire range dancing to the classic song “Do You Love Me?”. The fun video offers the first glimpse at two Atlas robots working together while also highlighting just how quickly this technology is developing.

Back in 2018, Boston Dynamics released a cute video of its dog-like Spot robot dancing to “Uptown Funk”. The playful video was a fun little demonstration of Spot’s broad range of movements, exciting at the time but very simplistic from today’s vantage. Now the company has stepped things up, delivering a long choreographed dance video featuring not only Spot but also two Atlas robots and a special appearance from Handle, a wheeled model.

Released as a kind of New Year’s gift from the company, the video is the first look at two Atlas humanoid robots working together. Atlas, still technically a prototype robot, has demonstrated a stunningly rapid evolution over the past decade from barely being able to walk in 2013, to being allowed to roam tetherless in 2015, completing a spectacular parkour routine just three years later, and finally getting acrobatic last year.


Original Submission

posted by requerdanos on Wednesday December 23 2020, @01:12AM
from the ghost-in-the-machine dept.

Exploring the potential of near-sensor and in-sensor computing systems:

As the number of devices connected to the internet continues to increase, so does the amount of redundant data transfer between different sensory terminals and computing units. Computing approaches that intervene in the vicinity of or inside sensory networks could help to process this growing amount of data more efficiently, decreasing power consumption and potentially reducing the transfer of redundant data between sensing and processing units.

Researchers at Hong Kong Polytechnic University have recently carried out a study outlining the concept of near-sensor and in-sensor computing. These are two computing approaches that enable the partial transfer of computation tasks to sensory terminals, which could reduce power consumption and increase the performance of algorithms.

"The number of sensory nodes on the Internet of Things continues to increase rapidly," Yang Chai, one of the researchers who carried out the study, told TechXplore. "By 2032, the number of sensors will be up to 45 trillion, and the generated information from sensory nodes is equivalent to 1020 bit/second. It is thus becoming necessary to shift part of the computation tasks from cloud computing centers to edge devices in order to reduce energy consumption and time delay, saving communication bandwidth and enhancing data security and privacy."

[...] So far, the work by Chai and his colleagues has primarily focused on vision sensors. However, near-sensor and in-sensor computing approaches could also integrate other types of sensors, such as those that detect acoustic, pressure, strain, chemical or even biological signals.
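
To make the general idea concrete, here is a toy near-sensor sketch (an illustration under assumed names, not code from the paper): instead of streaming every raw reading to a server, the node keeps a small amount of processing next to the sensor and transmits only readings that change significantly.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Event {
        int index;      // which sample triggered the event
        double value;   // the reading at that sample
    };

    // Local pre-processing at the sensor node: keep only readings that
    // differ from the last transmitted value by more than `threshold`.
    std::vector<Event> reduce_near_sensor(const std::vector<double>& samples,
                                          double threshold) {
        std::vector<Event> events;
        double last_sent = 0.0;
        for (int i = 0; i < static_cast<int>(samples.size()); ++i) {
            if (i == 0 || std::abs(samples[i] - last_sent) > threshold) {
                events.push_back({i, samples[i]});
                last_sent = samples[i];
            }
        }
        return events;
    }

    int main() {
        // A slowly varying signal with one spike: most readings never
        // leave the node because they carry no new information.
        std::vector<double> samples = {1.00, 1.01, 1.02, 5.00, 1.03, 1.02};
        std::vector<Event> events = reduce_near_sensor(samples, 0.5);
        std::printf("transmitting %zu of %zu samples\n",
                    events.size(), samples.size());
        for (const Event& e : events)
            std::printf("  sample %d = %.2f\n", e.index, e.value);
    }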

Journal References:
1.) Feichi Zhou, Yang Chai. Near-sensor and in-sensor computing, Nature Electronics (DOI: 10.1038/s41928-020-00501-9)
2.) Feichi Zhou, Zheng Zhou, Jiewei Chen, et al. Optoelectronic resistive random access memory for neuromorphic vision sensors, Nature Nanotechnology (DOI: 10.1038/s41565-019-0501-3)
3.) Yang Chai. In-sensor computing for machine vision, Nature (DOI: 10.1038/d41586-020-00592-6)


Original Submission

posted by requerdanos on Sunday December 20 2020, @07:56PM
from the wintel-no-more? dept.

Microsoft may be developing its own in-house ARM CPU designs:

This afternoon, Bloomberg reported that Microsoft is in the process of developing its own ARM CPU designs, following in the footsteps of Apple's M1 mobile CPU and Amazon's Graviton datacenter CPU.

Bloomberg cites off-record conversations with Microsoft employees who didn't want to be named. These sources said that Microsoft is currently developing an ARM processor for data center use and exploring the possibility of another for its Surface line of mobile PCs.

[...] Even if Bloomberg's report proves 100 percent accurate, the end result is likely to follow Amazon's lead much more closely than Apple's. Although Amazon tightened its supply chain by producing its own Graviton hardware, its software ecosystem remains open—without solid Linux operating system support, a server's future in a data center is very poor indeed. Microsoft would face the same challenges with a data center-focused product, and for the same reasons—although the "less likely" Surface ecosystem would be considerably less constrained.


Original Submission

posted by requerdanos on Sunday December 20 2020, @03:11PM

Plants can be larks or night owls just like us:

Plants have the same variation in body clocks as that found in humans, according to new research that explores the genes governing circadian rhythms in plants.

[...] These rhythmic patterns can vary depending on geography, latitude, climate and seasons - with plant clocks having to adapt to cope best with the local conditions.

[...] To investigate the genetic basis of these local differences, [researchers at the Earlham Institute and John Innes Centre in Norwich] examined varying circadian rhythms in Swedish Arabidopsis plants to identify and validate genes linked to the changing tick of the clock.

Dr Hannah Rees, a postdoctoral researcher at the Earlham Institute and author of the paper, said: “A plant’s overall health is heavily influenced by how closely its circadian clock is synchronised to the length of each day and the passing of seasons. An accurate body clock can give it an edge over competitors, predators and pathogens.

“We were interested to see how plant circadian clocks would be affected in Sweden; a country that experiences extreme variations in daylight hours and climate. Understanding the genetics behind body clock variation and adaptation could help us breed more climate-resilient crops in other regions.”

[...] "It’s amazing that just one base-pair change within the sequence of a single gene can influence how quickly the clock ticks," explained Dr Rees.


Journal Reference:
Hannah Rees, Ryan Joynson, James K.M. Brown, et al. Naturally occurring circadian rhythm variation associated with clock gene loci in Swedish Arabidopsis accessions, Plant, Cell & Environment (DOI: 10.1111/pce.13941)


Original Submission

posted by martyb on Friday December 18 2020, @11:05AM

teste 0

soylentnews.org:

Ever wonder why a pizza made in your home oven doesn't taste as good as one made in a brick oven? You're not the only one. Some researchers think they've figured it out:

They started off interviewing pizzaiolos, or pizza makers, in Rome who were masters of the Roman style of pizza. For this, the bake lasts 2 minutes at 626 degrees Fahrenheit. (Neapolitan pizzas usually bake at an even higher temperature — at least 700 degrees.) That turns out a "well-baked but still moist dough and well-cooked toppings," Glatz says. The same settings in a conventional steel oven produce far less ideal results. "You burn the dough before the surface of the pizza even reaches boiling, so this is not a product you will want to eat," he says.

The story goes on to note that the thermal conductivity of a metal oven is much greater than that of a brick oven, leading to burning of the crust. Adjusting with a lower temperature fails, as it then leaves a dried-out crust and toppings. Accommodations with a pizza stone, oil, and a broiler can help, but cannot entirely mitigate the difference.
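
The excerpt doesn't show the underlying math, but the textbook result this reasoning rests on is the contact temperature of two semi-infinite bodies, which is weighted by each material's thermal effusivity (the symbols below are the standard ones, not taken from the article):

    T_{\mathrm{contact}} = \frac{e_{\mathrm{oven}}\,T_{\mathrm{oven}} + e_{\mathrm{dough}}\,T_{\mathrm{dough}}}{e_{\mathrm{oven}} + e_{\mathrm{dough}}},
    \qquad e = \sqrt{k\,\rho\,c_{p}}

Steel's effusivity is many times larger than the dough's, so the dough surface is pushed close to the oven temperature and chars; brick's effusivity is only modestly larger than the dough's, which keeps the contact temperature lower at the same oven setting.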

When I was in college the original Battlestar Galactica television series came out. We would gather in an upperclassman's dorm room and watch the show on a 13-inch TV. This was followed immediately by a trip to the local Rathskeller and an order for what we called a "death star" pizza... "double loaded extra everything, no guppies" (i.e. anchovies). That and a couple of pitchers of beer was a fine way to wrap up a Sunday.

What are your favorite toppings? Alternatively, are there any toppings you think should never be put on a pizza (such as pineapple)?

Abortion rights advocates are exploring how technology might preserve or even expand women's access to abortion if the Supreme Court scales back Roe v. Wade. A nonprofit group is testing whether it's safe to let women take abortion pills in their own homes after taking screening tests and consulting with a doctor on their phones or computers. Because the study is part of an FDA clinical trial, the group isn't bound by current rules requiring the drugs be administered in a doctor's office or clinic.

The group, called Gynuity Health Projects, is carrying out the trial in five states that already allow virtual doctors to oversee administration of the abortion pill, and may expand to others. If the trial proves that allowing women to take the pill at home is safe — under a virtual doctor's supervision — the group hopes the FDA could eventually loosen restrictions to allow women to take pills mailed to them after the consult. If FDA took that step, it could even help women in states with restrictive abortion laws get around them, potentially blurring the strict boundaries between abortion laws in different states if — as is likely — the Senate confirms a high court justice who is open to further limits on Roe.

"Flop accounts bring attention to bad things or bad people that people should be aware of. We also post cringeworthy content for entertainment purposes," said Alma, a 13-year-old admin on the flop account @nonstopflops.

According to teens, flop accounts began as a way to make fun of celebrities and popular YouTubers, but sometime over the past year they've morphed into something more substantive: a crucial way to share and discuss opinions online.

"Content [on flop accounts] is centralized around things that we think are factually or morally wrong, and it's how we critique them," said Taylor, a 15-year-old in Illinois who is an admin on a flop account. "Today, for instance, I posted a flop that was this lady making fun of someone for being homeless. That's a horrible thing to do."

The main thing teens who engage with flop accounts share is a strong distrust of the news media. Teens said they turned to flop accounts specifically because they didn't believe what they read in the news, saw on TV, or even were taught in their U.S.-history class, since, as one teen saw it, their teacher is just one person giving an opinion. Teen flop-account admins and followers said they found information on flop accounts to be far more reliable because it could be crowdsourced and debated.

Protons might be the Large Hadron Collider's bread and butter, but that doesn't mean it can't crave more exotic tastes from time to time. On Wednesday, 25 July, for the very first time, operators injected not just atomic nuclei but lead "atoms" containing a single electron into the LHC. This was one of the first proof-of-principle tests for a new idea called the Gamma Factory, part of CERN's Physics Beyond Colliders project.

"We're investigating new ideas of how we could broaden the present CERN research programme and infrastructure," says Michaela Schaumann, an LHC Engineer in Charge. "Finding out what's possible is the first step."

During normal operation, the LHC produces a steady stream of proton–proton collisions, then smashes together atomic nuclei for about four weeks just before the annual winter shutdown. But for a handful of days a year, accelerator physicists get to try something completely new during periods of machine development. Previously, they accelerated xenon nuclei in the LHC and tested other kinds of partially stripped lead ions in the SPS accelerator.

[...] Physicists are doing these tests to see if the LHC could one day operate as a gamma-ray factory. In this scenario, scientists would shoot the circulating "atoms" with a laser, causing the electron to jump into a higher energy level. As the electron falls back down, it spits out a particle of light. In normal circumstances, this particle of light would not be very energetic, but because the "atom" is already moving at close to the speed of light, the energy of the emitted photon is boosted and its wavelength is squeezed (due to the Doppler effect).
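
The excerpt skips the numbers, but the standard relativistic Doppler bookkeeping behind that boost (a sketch, not a quote from CERN) is roughly:

    E_{\mathrm{ion\ frame}} \approx 2\gamma\, E_{\mathrm{laser}}
    E_{\mathrm{lab,\,max}} \approx 2\gamma\, E_{\mathrm{ion\ frame}} \approx 4\gamma^{2}\, E_{\mathrm{laser}}

The first line is the head-on blue-shift of the laser photon into the ion's rest frame, where it drives the atomic transition; the second is the boost of the re-emitted photon back into the lab frame, with γ the ion beam's Lorentz factor, of order a few thousand for lead beams in the LHC.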


Original Submission

posted by requerdanos on Friday December 18 2020, @11:02AM
from the bursting-with-potential dept.


Scientists see a new kind of explosion on the sun:

NASA said on December 17, 2019, that its Solar Dynamics Observatory (SDO) observed a kind of magnetic explosion on the sun that scientists had never seen before. The spacecraft spied the explosion when a prominence — a large loop of material launched by an eruption on the sun’s surface — started falling back to the surface. Before it reached the surface, the prominence ran into a snarl of magnetic field lines, sparking a magnetic explosion. A statement from NASA explained:

Scientists have previously seen the explosive snap and realignment of tangled magnetic field lines on the sun – a process known as magnetic reconnection – but never one that had been triggered by a nearby eruption. The observation, which confirms a decade-old theory, may help scientists understand a key mystery about the sun’s atmosphere, better predict space weather, and may also lead to breakthroughs in the controlled fusion and lab plasma experiments.

So the new kind of magnetic explosion – called forced magnetic reconnection – wasn’t entirely unexpected, but it’s been theoretical until now. This sort of explosion was first theorized 15 years ago.

Journal Reference:
On the Observations of Rapid Forced Reconnection in the Solar Corona, The Astrophysical Journal (DOI: 10.3847/1538-4357/ab4a0c)


Original Submission