
Monday, June 10, 2019

Google's Quest to Build the Perfect One-Tap Smartphone Camera

Across all three generations, the cameras on Google’s Pixel are extraordinary in their simplicity. You don’t get much in the way of manual controls, and even as competitors like Samsung, Apple, Huawei, and others have added more and more sensors to the backs of their phones, the Pixel 3 and 3a have held firm with just a single rear camera.

On top of that, if you check out the specs for the Pixel 3's camera, like its 12-MP resolution and f/1.8 aperture, those figures don’t exactly stand out compared to specs on other phones—there’s no 48-MP sensor or f/1.5 aperture here. And yet, when it comes to the kind of photos a Pixel can produce, the image quality you get from Google’s latest phones is often unmatched.

This gap between the Pixel’s specs and the results it puts out is something that stands in opposition to traditional smartphone camera development, which typically results in device makers trying to cram bigger lenses and sensors into their gadgets. So to find out more about Google’s innovative approach to making your cat photos (and everything else) look better, I spoke to Marc Levoy, a distinguished engineer at Google, and Isaac Reynolds, a product manager for the Pixel camera team, who are two of the leaders driving the development of Google’s photography efforts. You can watch highlights from my interview in the video above.

So what’s the other part of the formula for capturing high-quality pictures? Software, driven largely by techniques collectively known as computational photography. Levoy was quick to point out that the field of computational photography is much bigger than what Google is doing, but in short, it amounts to using software and computers to manipulate a photo—or more often a series of photos—to create a final image that looks significantly better than the originals.

This is the principle behind the Pixel’s HDR+ camera mode, which takes multiple photos at different exposures and then combines them to preserve shadows and details better, while also enhancing things like resolution and high dynamic range. The use of computational photography even helps define “the look” of photos shot by a Pixel phone, because unlike other smartphone cameras, Levoy claims that the Pixel camera will rarely blow out highlights.
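The noise-reduction side of burst merging is easy to sketch. The toy example below is a rough illustration only, not Google's actual HDR+ pipeline (which aligns tiles and merges them far more carefully); it simply shows how averaging several aligned frames cuts noise roughly with the square root of the frame count:

```python
import numpy as np

def merge_burst(frames):
    """Toy burst merge: average aligned frames to reduce noise.
    (Real HDR+ aligns tiles and merges robustly; this sketch
    assumes the frames are already perfectly aligned.)"""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst of 8 noisy captures of the same scene.
rng = np.random.default_rng(0)
scene = np.linspace(0.1, 1.0, 100)  # ground-truth radiance ramp
burst = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(8)]

merged = merge_burst(burst)
noise_single = np.std(burst[0] - scene)
noise_merged = np.std(merged - scene)
# Averaging 8 frames should cut noise by roughly sqrt(8), about 2.8x.
print(noise_single / noise_merged)
```

Preserving highlights then comes from deliberately underexposing each short frame and tone-mapping the clean merged result, rather than from a single long exposure that clips.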

Sometimes, that means a Pixel photo might look underexposed, but in scenes like the one above, while the Galaxy S10's shot is generally brighter and arguably more pleasing to the eye, it lacks a lot of detail in the sunset, which for me, was the whole reason why I snapped the pic in the first place.

Better-looking photos aren’t the only benefit of Google’s software-first approach to photography. It also makes the Pixel’s camera app easier to use. That’s because as powerful as Google’s software is, it’s not much help if it’s so complicated that no one can use it.

Levoy explained that this balance creates a sort of creative tension, where after demoing a potential new feature to the Pixel team, the challenge becomes how to build it into the camera’s functionality so that a user doesn’t need to think about it to get results.

Night Sight is an excellent example of this because once you turn it on, there are no other settings you need to mess with. You just enable Night Sight and tap the shutter button. That’s it. Meanwhile, in the background, the Pixel will evaluate the amount of available light and use machine learning to measure how steady your hands are. This information is then used to determine how low to set the camera’s shutter speed, how many frames the camera needs to capture, and other settings to create the best possible image.
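The trade-off at work here, darker scenes wanting longer exposures while shakier hands force shorter ones, can be sketched with a toy heuristic. Everything below (the thresholds, the handshake metric, the 15-frame cap) is invented for illustration; Google's actual tuning is not public:

```python
def night_sight_plan(scene_lux, handshake_deg_per_s):
    """Hypothetical heuristic in the spirit of Night Sight's auto-tuning:
    darker scenes get a bigger total light budget, shakier hands force
    shorter per-frame exposures, and the budget is spread across frames.
    All constants here are illustrative, not Google's."""
    # Shakier hands mean a faster shutter to avoid motion blur.
    max_exposure_s = min(1.0 / 3.0, 0.05 / max(handshake_deg_per_s, 0.1))
    # Darker scenes need more total light (a made-up lux-seconds budget).
    target_budget = 4.0 / max(scene_lux, 0.5)
    frames = max(1, min(15, round(target_budget / max_exposure_s)))
    return {"per_frame_exposure_s": max_exposure_s, "frames": frames}

# Dim scene, steady hands: long per-frame exposures, fewer frames needed.
steady = night_sight_plan(scene_lux=1.0, handshake_deg_per_s=0.1)
# Same scene, shaky hands: shorter exposures, so more frames.
shaky = night_sight_plan(scene_lux=1.0, handshake_deg_per_s=2.0)
print(steady, shaky)
```

The point of the sketch is the one-tap contract: the user supplies nothing, and the exposure plan falls out of measured scene brightness and hand steadiness.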

This streamlined approach to photography has its trade-offs, especially if you’re used to the traditional controls you might find in a DSLR or fancy mirrorless camera. Unlike camera apps on other phones, the Pixel doesn’t offer manual controls for setting things like shutter speed, exposure compensation, or ISO. This balance between high-quality results and user control is something the Pixel camera team constantly struggles with.

In the end, Reynolds summed it up by saying “If you could build a user interface that perfectly took that complexity—those three-tap processes—and put them where they wouldn’t affect the one-tap user, absolutely. That sounds fantastic. But it’s impossible to actually hide those things way down under the hood like that. If you try to add a use case that takes three taps, you’re going to compromise the one tap.” This is why, when push comes to shove, Google always comes back to its one-tap mantra.

As a counterpoint, Reynolds pointed out that while other phones come with pro modes that allow people to tweak camera controls, typically, as soon as you switch out of auto and into manual, you lose a lot of the extra processing and AI-assisted photo enhancements companies like Huawei and Samsung have been adding to their handsets. The results of more control frequently aren’t better than leaving it all to the computer.

But perhaps the most significant advantage of computational photography may be for the average person who only buys a new phone every two or three years. Since much of the magic inside a Pixel’s camera rests in software, it’s much easier to port features like Night Sight and Super Res Zoom, which first made their debut on the Pixel 3, to older devices including both the Pixel 2 and the original Pixel.

This also comes into play on lower-priced devices like the $400 Pixel 3a, because despite costing half the price of a standard Pixel 3, it delivers essentially the same high-end image quality. And in a somewhat surprising move, the newest addition to the Pixel camera—a new hyper-lapse mode—was first announced on the Pixel 3a before making its way to the rest of the Pixel family.

Sadly, when I asked about what might be the next feature heading to the Pixel camera, Levoy and Reynolds were a bit cagey. Personally, as impressive as the Pixel’s camera is, I still often find myself wondering what Google could do if the next Pixel had dual rear cams—perhaps one with an optical zoom. After all, the Pixel 3 does have two cameras in front for capturing standard and ultra-wide angle shots. I guess we’ll have to wait and see.
Amazon's home surveillance company Ring is using video captured by its doorbell cameras in Facebook advertisements that ask users to identify and call the cops on a woman whom local police say is a suspected thief.

In the video, the woman’s face is clearly visible and there is no obvious criminal activity taking place. The Facebook post shows her passing between two cars. She pulls the door handle of one of the cars, but it is locked.

The video freezes on a still of the woman’s face from two different angles: “If you recognize this woman, please contact the Mountain View Police Department … please share with your neighbors,” text superimposed on the video says. In a post alongside the video, Ring urges residents of Mountain View, California, to contact the police department if they recognize her:

Do you live in 94040 (or nearby)? Mountain View Residents: Do you recognize this woman? On May 22, this woman was caught on camera breaking into a vehicle at a Mountain View home near Castro St and Miramonte Ave. If you have any information on the whereabouts of this woman, please contact the Mountain View Police Department at 650-903-6344 (Case Number: 19-3742). And please share this post, so we can all stay alert.

The post is popping up on some people's feeds as sponsored, as it did on Jon Hendren's Facebook feed:

Sponsored posts are advertisements that are paid for by the company and are typically targeted to a specific audience. Hendren lives in the Mountain View area. Ring confirmed to Motherboard that it did sponsor this post.

Amazon purchased Ring in 2018. The company sells surveillance camera systems, and recently filed two patent applications for facial recognition technology in its cameras that would automatically alert law enforcement to "suspicious" people. Its “Neighbors” app has become a de facto private neighborhood watch, in which people who own Ring surveillance cameras (and others who simply have the app) discuss "suspicious" activity in their communities.

A post on the Mountain View Police Department's website details the incident and also shares an image from the Ring camera. "Footage obtained from a neighbor’s home captured a woman who is believed to be the suspect in the theft," the post says. The woman is suspected of stealing someone's purse and wallet from inside a car and making a series of purchases around town with the stolen credit cards.

A spokesperson for MVPD told Motherboard in an email that "while we did not ask Ring to post footage, the additional outreach, and the additional eyes that may see this woman and recognize her, are most welcome and helpful!" A spokesperson for Ring told Motherboard in an email that its Facebook post encourages communities to work with local cops to "help keep neighborhoods safe."

"Alerts are created using publicly posted content from the Neighbors app that has a verified police report case number,” the company said. “We get the explicit consent of the Ring customer before the content is posted, and utilize sponsored, geotargeted posts to limit the content to relevant communities."

Police departments request information from private companies all of the time. But Ring's call to action shows how Amazon advertises Ring as a vigilante extension of law enforcement.

Read more: Amazon's Home Security Company Is Turning Everyone Into Cops

Ring is also using the image of a woman who is innocent until proven guilty and calling her a thief in an ad that it's paying to get in front of a targeted audience in order to sell more home surveillance equipment. The company doesn't claim to know for certain that she's committed a crime, and the police have yet to catch or convict anyone in this case. Amazon has also sold Rekognition, its facial recognition software, to cops around the country, which is worth keeping in mind as the company sells internet-connected residential surveillance cameras.

This isn't the first time Ring has worked with the cops: Publicly-available content on the Neighbors app, a social platform for Ring users that acts as a virtual "neighborhood watch," is open for law enforcement to peruse. Cops can request content from users about locations, dates, and time frames of specific incidents. Users get push notifications from the police about crimes in their area, and can view where they happened on an interactive map.

Previously, Motherboard has found that a lot of what happens on Neighbors is predictably ugly. Racial profiling abounds on the platform, coded as public safety and efforts toward the greater good. When someone reports a crime on Neighbors and says they've filed a police report, a Ring employee will sometimes reply to encourage that user to share the case number and officer's contact information. Once they have a case number, they claim, Ring can use the video to help law enforcement.

The Intercept has also reported that Ring built a portal within Neighbors for law enforcement to request and access camera footage and talk with users directly about cases.

Ring plays into people's mistrust of strangers. The company's official Twitter feed is a mix of cute animal videos and goofy moments caught at the front door, but that's not why people purchase surveillance systems. Posting an un-anonymized video of a person suspected of a crime—not a convicted criminal, but a still-innocent person in the eyes of the law—and then imploring users to snitch, is another step toward what Amazon's really selling: an always-on surveillance state as a way to placate our fears.

Apple asks applications to offer "Sign in with Apple" if they use competing sign-in services

One of the new features of iOS 13 is the ability for applications to offer a "Sign in with Apple" option. The feature is primarily a response to the login services provided by other companies such as Facebook and Google, but Apple says its version will give users more privacy, since it does not collect data about users who choose this option.

In a post on its official developer site, Apple provided additional details about the upcoming "Sign in with Apple" feature, including a requirement that it be added as an option in applications offering competing services. This means that if an application allows you to sign in with Facebook or Google, it will need to add the option to sign in with Apple as well.


"Apple's sign-in option will be available for testing this summer," said Apple, adding that it will be provided to users in applications that support third-party login when it is commercially available later this year. We are certain that some developers are not thrilled about having to include this feature in their applications, but Apple's decision will help accelerate the adoption of the "Sign in with Apple" option.

Overall, the beta version of iOS 13 is now available to developers, and the official, final version of the system is expected to be released late in the third quarter of this year, most likely alongside the new iPhone.

The official announcement date of the Galaxy Note 10

Samsung officially unveiled the Galaxy Note 9 on the ninth of August last year, so it was expected that the company would unveil the Galaxy Note 10 in the same month this year as well. If the information we received today from South Korea is correct, we will see Samsung officially unveil the Galaxy Note 10 on August 10.

Samsung will launch two phones in the Galaxy Note series this year: the Galaxy Note 10 and the Galaxy Note 10 Pro.

We've recently seen some renders that show what both phones look like. It turns out the two will come in the same design, but the Galaxy Note 10 Pro will have a slightly larger screen and an extra camera on the back.

These renders also lend weight to earlier rumors that Samsung will remove the 3.5mm headphone jack from the new Galaxy Note 10, along with the dedicated Bixby button, which has received a lot of criticism in the past.

The Galaxy Note 9 was unveiled in August last year and went on sale on August 24, so if Samsung does announce the Galaxy Note 10 and Galaxy Note 10 Pro on August 10, it is possible they will go on sale on August 25.

iOS 13 will turn the iPhone into a portable PlayStation 4

With the announcement of iOS 13, it was revealed that Apple will finally support Sony's DualShock 4 controller for the PlayStation 4, as well as the Xbox One controller. What many people do not realize, however, is that this will have a greater impact than you might think.

Sony has an application called PS4 Remote Play, launched in March. In case you have not heard of it before, what it basically does is let you stream games from your PlayStation 4 to your iPhone over Wi-Fi. This turns your iPhone into a portable gaming device, but unfortunately, neither the virtual on-screen controls nor third-party controllers offer a great gaming experience.

Now that Apple has added support for Sony's DualShock 4 to iOS 13, the PS4 Remote Play application will become far more useful, because players will be able to play their PlayStation 4 games on their iPhone the same way they play them on the console at home.

Before you get too excited, though, keep in mind that not all PlayStation 4 games are compatible with PS4 Remote Play, and the fact that games can only be streamed over the same Wi-Fi network means you cannot take your iPhone out and play your PlayStation 4 games on the go. Still, it is a fairly fantastic feature.

Google wants the US government to lift the ban imposed on Huawei

When the US government officially added Huawei to its list of companies banned from dealing with US firms, Huawei lost the support of partners such as Google. However, Huawei appears to have found an unexpected ally in Google itself: according to a recent report from the Financial Times, Google is pressing the US government to lift the ban on Huawei.

According to the report, Google feels that cutting Huawei off from Android could be bad news for national security. Google believes that fragmenting Android would weaken the operating system and open the door to foreign actors.

Huawei is widely expected to create its own operating system, which some believe will still be based on AOSP. That would essentially mean releasing a forked version of Android that is not supported or protected by Google Play's security features, nor by the security updates Google releases every month.

The argument is that if someone with a secure Android phone sends something to a Huawei device running a compromised fork of Android, data that was supposed to be encrypted could be stolen. We will have to wait and see whether the US government is persuaded by Google's argument; for now, Huawei has been given a 90-day reprieve before the ban takes effect.

International Space Station opens its doors to tourists

NASA plans to allow "tourist" flights to the International Space Station starting in 2020, but a trip will carry a heavy price tag and come with several tough conditions.

The ISS is still the domain of space scientists representing state space agencies, meaning that private companies cannot currently send anyone to the scientific facility.

But NASA said in a statement on Sunday that the international station would open its doors to commercial flights, meaning private companies would be able to organize flights to the station.

The statement added that the agency may allow two short-duration flights a year to the International Space Station.

The flights will be privately funded, and transport to the station will be handled by Boeing and SpaceX, which are currently developing capsules that can carry people to the ISS.

According to preliminary estimates, sending one person is expected to cost about $50 million, and that number could rise further.

5G begins its invasion of the United Kingdom

Three, the UK mobile operator, said it would launch its first 5G broadband service in London in August and would cover 25 cities and towns before the end of the year.
Three, owned by Hutchison, said the move would see it join BT's EE and Vodafone in launching 5G service in 2019.

The £2 billion ($2.55 billion) investment in 5G network infrastructure includes network improvements in new British cities and a major cloud core network from Nokia.

"It's clear that customers and companies want more and more data," Chief Executive Dave Dyson said in a statement.

"We have worked hard over a long period of time to be able to provide the best end-to-end 5G service from the start. 5G is a game changer for the company."

EE launched its 5G service in six cities in May, while Vodafone will launch its service on July 3. Both companies dropped Huawei's smartphones from their 5G launch lineups because of uncertainty over Google's Android support for Huawei phones after the United States moved to block the Chinese company's access to its technology.
Three said it would announce details of the devices that will be part of the network launch in July.

The United States has said Huawei poses a security threat and is open to spying by Beijing, a claim denied by the Chinese company.

The British National Security Council decided in April to ban Huawei from supplying all key parts of the future 5G network, granting it only controlled access to non-essential parts, but the government has not yet made a final decision.

The fifth generation network is expected to be commercially launched globally in 2020.

Inside the Amazon Warehouse Where Humans and Machines Become One

They call me the Master of Robots—or at least they should. I grab a flat package, hold its barcode under a red laser dot, and place it on a small orange robot. I hit a button to my left and off zips the robot to do my bidding, bound for one of more than 300 rectangular holes in the floor corresponding to zip codes. When it gets there, the bot engages its own little conveyor belt, sliding the package off its back and down a chute to the floor below, where it can be loaded onto a truck for delivery.



This is not an experimental system in a robotics lab. These are real packages going to real people with the help of real robots in Amazon’s sorting facility of tomorrow, not far from the Denver airport. With any luck, my robot friend and I just successfully shipped a parcel to someone in Colorado. If not—well, blame the technology, not the user.

Seen from above, the scale of the system is dizzying. My robot, a stubby mobile slab known as a drive (or more formally and mythically, Pegasus), is just one of hundreds of its kind swarming a 125,000-square-foot “field” pockmarked with chutes. It’s a symphony of electric whirring, with robots pausing for one another at intersections and delivering their packages to the slides. After each “mission,” they form a neat queue at stations along the periphery, waiting for humans to scan a new package, load the robots once again, and dispatch them on another mission.

You don’t have to look far to see what a massive shake-up this is for the unseen logistics behind your Amazon deliveries. On the other side of the building are four humans doing things the old way, standing at the base of a slide flowing with packages. Frenetically they pick up the parcels, eyeball the label on each, and walk them over to the appropriate chutes. At the bottom of the chutes, yet more humans grab the packages and stack them on pallets for delivery. It’s all extremely labor-intensive and, in a word, chaotic.

Amazon needs this robotic system to supercharge its order fulfillment process and make same-day delivery a widespread reality. But the implications strike at the very nature of modern labor: Humans and robots are fusing into a cohesive workforce, one that promises to harness the unique skills of both parties. With that comes a familiar anxiety—an existential conundrum, even—that as robots grow ever more advanced, they’re bound to push more and more people out of work. But in reality, it’s not nearly as simple as all that.

If only the Luddites could see us fulfilling online orders now.

This Colorado warehouse is, in a way, a monument to robots. It’s not one of the Amazon fulfillment centers you’ve probably heard of by now, in which humans grab all the items in your order and pack them into a box. This is a sorting facility, which receives all those boxes and puts them on trucks to your neighborhood. The distinction is important: These squat, wheeled drives aren’t tasked with finely manipulating your shampoos and books and T-shirts. They’re mules.

Very, very finely tuned mules. A system in the cloud, sort of like air traffic control, coordinates the route of every robot across the floor, with an eye to potential interference from other drives on other routes. That coordination system also decides when a robot should peel off to the side and dock in a charger, and when it should return to work. Sometimes the route selection can get even more complicated, because particularly populous zip codes have more than one chute, so the system needs to factor in traffic patterns in deciding which portal a robot should visit.
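The routing half of that system resembles classic multi-agent path planning. A toy single-robot version, assuming an invented space-time reservation table rather than anything Amazon has described in detail, can be written as a breadth-first search over (cell, time) states:

```python
from collections import deque

def plan_route(grid_w, grid_h, start, goal, reserved, horizon=100):
    """Toy drive router: BFS over (cell, time) states that avoids
    space-time cells other drives have already reserved.
    `reserved` is a set of (x, y, t) triples."""
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        (x, y), t, path = queue.popleft()
        if (x, y) == goal:
            return path
        if t >= horizon:
            continue
        # A drive may wait in place or move to a 4-neighbor cell.
        for nx, ny in [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if (0 <= nx < grid_w and 0 <= ny < grid_h
                    and (nx, ny, t + 1) not in reserved
                    and ((nx, ny), t + 1) not in seen):
                seen.add(((nx, ny), t + 1))
                queue.append(((nx, ny), t + 1, path + [(nx, ny)]))
    return None

# Another drive holds cell (1, 0) at time step 1, so this drive
# waits one step and then takes the now-free direct path.
route = plan_route(5, 5, start=(0, 0), goal=(2, 0), reserved={(1, 0, 1)})
print(route)  # [(0, 0), (0, 0), (1, 0), (2, 0)]
```

At fleet scale the interesting part is re-planning: once a route is granted, its cells go into `reserved`, and every later request plans around them.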

“It's basically a very large sudoku puzzle,” says Ryan Clarke, senior manager of Special Amazon Robotics Technology Applications. “You want every column and every row to have an equal amount of drops. How do we make sure that every row and every column looks exactly equal to each other?” The end goal is to minimize congestion through an even distribution of traffic across the field. So on top of tweaking the robots’ routes, the system can actually switch the chute assignments around to match demand, so that neither the robots nor the human sorters they work with hit any bottlenecks.
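The "equal rows and columns" goal Clarke describes is, at its simplest, load balancing. Here is a minimal sketch, with invented chute IDs and zip codes, that assigns each scanned package to the least-loaded chute serving its zip code:

```python
from collections import Counter

def assign_chute(zip_code, chute_map, load):
    """Greedy load balancing: among the chutes serving this zip code,
    pick the one with the fewest pending drops. A toy stand-in for
    the field-wide balancing described in the article."""
    candidates = chute_map[zip_code]
    chosen = min(candidates, key=lambda c: load[c])
    load[chosen] += 1
    return chosen

# A busy zip code gets two chutes; a quiet one gets one.
chute_map = {"80249": ["A1", "B7"], "80022": ["C3"]}
load = Counter()
drops = [assign_chute("80249", chute_map, load) for _ in range(10)]
print(Counter(drops))  # the two chutes split the ten drops evenly
```

A real balancer would also weigh field congestion, and, as the article notes, could reassign which chutes serve which zip codes as demand shifts.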

To map out all this madness, Amazon runs simulations. Those in turn inform how the drives themselves should be performing. What’s the optimal speed? What’s the optimal acceleration and deceleration, given you want the deliveries to be as efficient as possible while keeping the robots from smashing into one another? After all, a bump might toss a package to the ground, which other robots would spot with their vision sensors and route around, adding yet another layer of complexity to the field. (The robots have sensors on either end of their conveyor belt, by the way, so if a package starts to slip off the side, the belt automatically engages to pull the package back on.)

The temptation might be to get these machines moving as quickly as possible. “But it would be like having a Ferrari in downtown San Francisco—all you're doing is stop and go,” says Clarke. “We looked at tuning it to many different parameters and found that more speed and more acceleration actually had a reverse effect. They were just bumping into each other and causing more pileups.”

Ready for more complexity? Amazon had to tweak the built space itself to keep the machines happy. Humans doing things the old way on the other side of the building, for instance, enjoy basking in the photons that pour through skylights. Above the robots’ field, though, the skylights are covered, because the glare might throw off the machines’ sensors. To navigate, they’re using a camera on their bellies that reads QR codes on the ground. Even the air-conditioning units hanging from the roof are modified. On the human side, they blow air straight downward, but above the robots they blow out to the side, because gusts of air could blow light packages off the machines’ conveyor belts.

Worse yet, precarious packages like liquids could send the system into chaos. So although the system is automated, humans still monitor the robots on flatscreens below the field, where the packages come down the chutes, and respond to crises. “Think about if I had a package and it had a gallon of paint in it, and that gallon of paint was damaged and it leaked down one of these chutes,” says Steve McDonnell, general manager of the sorting center. “Within minutes I'm able to shut that chute off, redirect drives to another chute, and I'm done.”

The key here is flexibility—not a word that first comes to mind when you think of robots. Flexibility in the robots’ pathways, in their destinations, in the number of robots on the field at once. You might, for example, think the more machines out there, the better. Amazon could deploy up to 800 drives simultaneously, but that could jam up the floor like traffic in a city. Instead, they’re typically operating 400 or 500, with others parked off to the side and waiting to be circulated in.

Beyond coordinating the robots themselves, there’s the question of how to make them good coworkers for the human employees. The humans’ job is to place packages in 6-foot-tall boxes below the field, taking care not to toss in heavy packages first. To make that work manageable, the robots have to distribute packages between the multiple chutes for a particular zip code, so a given chute doesn’t overflow. At the same time, the system considers how to best group packages downstairs by their departure time, so workers don’t have to run around hunting for them.
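That downstream grouping can be sketched as a second criterion in chute selection. The fields and the two-level sort key below are hypothetical, purely to illustrate the "don't make the associate's job harder" constraint:

```python
def pick_chute(package, chutes):
    """Hypothetical two-criterion pick: prefer a chute already staging
    the package's departure wave (so workers aren't hunting around),
    then break ties by lowest fill level to avoid overflow.
    All field names are invented for illustration."""
    candidates = [c for c in chutes if package["zip"] in c["zips"]]
    return min(
        candidates,
        # False sorts before True, so a matching wave always wins.
        key=lambda c: (c["wave"] != package["wave"], c["fill"]),
    )

chutes = [
    {"id": "A1", "zips": {"80249"}, "wave": "18:00", "fill": 40},
    {"id": "B7", "zips": {"80249"}, "wave": "21:00", "fill": 5},
]
pkg = {"zip": "80249", "wave": "18:00"}
print(pick_chute(pkg, chutes)["id"])  # A1: same wave beats lower fill
```

Preferring the matching departure wave keeps each container homogeneous for the workers below; the fill-level tiebreak keeps any one chute from overflowing.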

“The interaction between the associate and the drives is almost like a 3D chess set,” says McDonnell, “because you can optimize the drive field, but then you can make the associate's job harder below the field.”

Across the field from the human workers distributing packages to the drives, a prototype robotic arm, named Robin, sits at the end of a conveyor belt. Its “hand” is a vacuum manipulator, designed to snag boxes and flat packages.

This robotic arm is a test of what it might look like to further automate the work of shuffling packages around. The idea is that the conveyor will deliver packages to the arm, which would then load the drives. “We're going to feed it a little bit differently than we do with humans,” says Rob Whitten, senior technical program manager. “We're not going to just give it a pile coming down a chute—we're going to kind of toss it softballs. We're going to give it a little more structure so it can handle it.” For parcels it can't manipulate, like if they're too heavy or weirdly shaped, humans would step in to help.

As I walk down the line of human robot-loaders, I come across a worker who has set aside a broken box, which has spilled out bottles and other entrails. That uniquely capable human could do two things here: use his problem-solving skills to say, "Something is wrong, I need to set these aside," and then manipulate those objects with exceedingly fine motor skills.

This robot arm has neither problem-solving prowess nor fine motor skills. Imagine if clear laundry liquid had broken inside a package and soaked the bottom of the box. A human might smell the detergent or feel its stickiness before they see it. A robot arm relying on sight alone would miss the problem, loading the package on a drive robot that then snail-slimes the floor of the field.

Even if they had some semblance of judgment, robots are still awful at manipulating complex objects like bottles. That’s why Amazon is keeping it simple here, with a suction arm meant to stick to flat surfaces, as opposed to an analog of the human hand. For quite some time, humans will need to (nearly) literally hold these robots’ hands.
The bottom line is this: We humans have to adapt to the machines as much as the machines have to adapt to us. Our careers depend on it.

Amazon runs simulations to figure out how to keep its human workers comfortable when loading robots with packages. That covers their range of movement from an ergonomics standpoint and their safety, as well as questions like how best for a human to grab a parcel, scan it, place it, and reach over to hit the button that sends the robot on its way. “There's an art to making it feel seamless between what the robot is doing and what the humans are doing,” says Brad Porter, VP of robotics at Amazon.

It’s the kind of dynamic environment that’s perfect for the development of Amazon’s next iteration of its system. The company is working on a new modular robot called Xanthus with different attachments, say to hold containers instead of using a conveyor belt. This machine will in a sense bridge the divide between fulfillment centers, where humans are loading products into boxes by hand, and sorting centers, where they’re mostly working with those assembled boxes.

‘Homework gap’ shows millions of students lack home internet

HARTFORD, Conn. (AP) — With no computer or internet at home, Raegan Byrd’s homework assignments present a nightly challenge: How much can she get done using just her smartphone?
On the tiny screen, she switches between web pages for research projects, losing track of tabs whenever friends send messages. She uses her thumbs to tap out school papers, but when glitches keep her from submitting assignments electronically, she writes them out by hand.
“At least I have something, instead of nothing, to explain the situation,” said Raegan, a high school senior in Hartford.
She is among nearly 3 million students around the country who face struggles keeping up with their studies because they must make do without home internet. In classrooms, access to laptops and the internet is nearly universal. But at home, the cost of internet service and gaps in its availability create obstacles in urban areas and rural communities alike.

In what has become known as the homework gap, an estimated 17% of U.S. students do not have access to computers at home and 18% do not have home access to broadband internet, according to an Associated Press analysis of census data.

Until a couple of years ago, Raegan’s school gave every student a laptop equipped with an internet hot spot. But that grant program lapsed. In the area surrounding the school in the city’s north end, less than half of households have home access.

School districts, local governments and others have tried to help. Districts installed wireless internet on buses and loaned out hot spots. Many communities compiled lists of Wi-Fi-enabled restaurants and other businesses where children are welcome to linger and do schoolwork. Others repurposed unused television frequencies to provide connectivity, a strategy that the Hartford Public Library plans to try next year in the north end.

Some students study in the parking lots of schools, libraries or restaurants — wherever they can find a signal.

The consequences can be dire for children in these situations, because students with home internet consistently score higher in reading, math and science. And the homework gap in many ways mirrors broader educational barriers for poor and minority students.

Students without internet at home are more likely to be students of color, from low-income families or in households with lower parental education levels. Janice Flemming-Butler, who has researched barriers to internet access in Hartford’s largely black north end, said the disadvantage for minority students is an injustice on the same level as “when black people didn’t have books.”

Raegan, who is black, is grateful for her iPhone, and the data plan paid for by her grandfather. The honors student at Hartford’s Journalism and Media Academy tries to make as much progress as possible while at school.

“On a computer — click, click — it’s so much easier,” she said.

Classmate Madison Elbert has access to her mother’s computer at home, but she was without home internet this spring, which added to deadline stress for a research project.

“I really have to do everything on my phone because I have my data and that’s it,” she said.

Administrators say they try to make the school a welcoming place, with efforts including an after-school dinner program, in part to encourage students to use the technology in the building. Some teachers offer class time for students to work on projects that require an internet connection.

English teacher Susan Johnston said she also tries to stick with educational programs that offer smartphone apps. Going back to paper and chalkboards is not an option, she said.

“I have kids all the time who are like, ‘Miss, can you just give me a paper copy of this?’ And I’m like, ‘Well, no, because I really need you to get familiar with technology because it’s not going away,’” she said.

A third of households with school-age children that do not have home internet cite the expense as the main reason, according to federal Education Department statistics gathered in 2017 and released in May. The survey found the number of households without internet has been declining overall but was still at 14% for metropolitan areas and 18% in nonmetropolitan areas.

A commissioner at the Federal Communications Commission, Jessica Rosenworcel, called the homework gap “the cruelest part of the digital divide.”

In rural northern Mississippi, reliable home internet is not available for some at any price.

On many afternoons, Sharon Stidham corrals her four boys into the school library at East Webster High School, where her husband is assistant principal, so they can use the internet for schoolwork. A cellphone tower is visible through the trees from their home on a hilltop near Maben, but the internet signal does not reach their house, even after they built a special antenna on top of a nearby family cabin.

A third of the 294 households in Maben have no computer and close to half have no internet.

Her 10-year-old son, Miles, who was recently diagnosed with dyslexia, plays an educational computer game that his parents hope will help improve his reading and math skills. His brother, 12-year-old Cooper, says teachers sometimes tell students to watch a YouTube video to help figure out a math problem, but that’s not an option at his house.

On the outskirts of Starkville, home to Mississippi State University, Jennifer Hartness said her children often have to drive into town for a reliable internet connection. Her daughter Abigail Shaw, who does a blend of high school and college work on the campus of a community college, said most assignments have to be completed using online software, and that she relies on downloading class presentations to study.

“We spend a lot of time at the coffee shops, and we went to McDonald’s parking lot before then,” Abigail said.

At home, the family uses a satellite dish that costs $170 a month. It allows a certain amount of high-speed data each month and then slows to a crawl. Hartness said it’s particularly unreliable for uploading data. Abigail said she has lost work when satellites or phones have frozen.

Raegan says she has learned to take responsibility for her own education.

“What school does a good job with,” she said, “is making students realize that when you go out into the world, you have to do things for yourself.”

Tecno Camon i4 Summary

Tecno Camon i4 smartphone was launched in March 2019. The phone comes with a 6.20-inch display with a resolution of 720x1520 pixels and an aspect ratio of 19.5:9. The Tecno Camon i4 is powered by a 2GHz quad-core MediaTek Helio A22 processor and comes with 2GB of RAM.

The Tecno Camon i4 runs Android 9.0 Pie and is powered by a 3,500mAh battery. The Tecno Camon i4 supports proprietary fast charging.
As far as the cameras are concerned, the Tecno Camon i4 on the rear packs a 13-megapixel primary camera with an f/1.8 aperture; a second 8-megapixel camera and a third 2-megapixel camera. It sports a 16-megapixel camera on the front for selfies.
The Tecno Camon i4 runs HIOS 4.6 based on Android 9.0 Pie and packs 32GB of inbuilt storage that can be expanded via microSD card (up to 256GB) with a dedicated slot. The Tecno Camon i4 is a dual-SIM (GSM and GSM) smartphone.
Connectivity options on the Tecno Camon i4 include Wi-Fi 802.11 b/g/n, GPS, Bluetooth v5.00, USB OTG, Micro-USB, FM radio, 3G, and 4G (with support for Band 40 used by some LTE networks in India) with active 4G on both SIM cards. Sensors on the phone include accelerometer, ambient light sensor, compass/ magnetometer, gyroscope, proximity sensor, and fingerprint sensor.
The Tecno Camon i4 measures 156.90 x 75.80 x 7.96mm (height x width x thickness). It was launched in Aqua Blue, Champagne Gold, Midnight Black, and Nebula Black colours.
As of 10th June 2019, Tecno Camon i4 price in India starts at Rs. 8,888.