
Technology Chrome will use AI to describe images for blind and low-vision users

15:17  10 October 2019   Source: engadget.com



To help blind and low-vision users, Google is using machine learning to generate descriptions for millions of images. The tool has already labeled more than 10 million images during a few months of testing. It is being slowly rolled out, and Chrome is promoting it specifically to people who use screen readers.

The internet can be a difficult place to navigate for people who are blind or who have low vision. A large portion of content on the internet is visual, and unless website creators use alt text to label their images, it's hard for users of screen readers or Braille displays to know what they show.
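The alt text the article refers to is the HTML `alt` attribute on `<img>` tags. As an illustrative sketch (not a tool the article mentions), a script using Python's standard-library HTML parser could flag images that lack a description:

```python
from html.parser import HTMLParser

class UnlabeledImageFinder(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.unlabeled = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # Without alt text, a screen reader falls back to announcing
            # "image", "unlabeled graphic", or the raw file name.
            if not attr_map.get("alt"):
                self.unlabeled.append(attr_map.get("src", "(no src)"))

page = '<img src="chart.png"><img src="cat.jpg" alt="a cat on a sofa">'
finder = UnlabeledImageFinder()
finder.feed(page)
print(finder.unlabeled)  # only chart.png lacks a description
```

Only the first image is reported, because the second already carries usable alt text.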

[Image: a close up of a black keyboard]

To address the issue, Google has announced a new feature for Chrome which will use machine learning to recognize images and offer text descriptions of what they show. It is based on the same technology which lets users search for images by keyword, and the description of the image is auto-generated.

"The unfortunate state right now is that there are still millions and millions of unlabeled images across the web," said Laura Allen, a senior program manager on the Chrome accessibility team. She understands the issue as she has low vision herself. "When you're navigating with a screen reader or a Braille display, when you get to one of those images, you'll actually just basically hear 'image' or 'unlabeled graphic,' or my favorite, a super long string of numbers which is the file name, which is just totally irrelevant."




An example of a descriptive text given by the feature would be "Appears to be fruits and vegetables at the market" for an image of a market stall. The descriptions are couched with "appears to be" so users know they are generated by a computer and may not be fully accurate.
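Google has not published the code behind this feature, but the "appears to be" couching described above can be sketched as a simple wrapper around whatever raw caption a model returns. The function name and formatting choices here are illustrative assumptions, not Chrome's actual implementation:

```python
def hedge_caption(model_caption: str) -> str:
    """Prefix a machine-generated caption so users know it is a guess,
    mirroring the "Appears to be ..." phrasing Chrome uses."""
    caption = model_caption.strip().rstrip(".")
    if caption:
        # Lowercase the first character so it reads naturally after the prefix.
        caption = caption[0].lower() + caption[1:]
    return f"Appears to be {caption}"

print(hedge_caption("Fruits and vegetables at the market."))
# → Appears to be fruits and vegetables at the market
```

The point of the wrapper is purely about user trust: the hedge signals that the description is auto-generated and may not be fully accurate.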

The feature is available only to users with screen readers that output spoken feedback or Braille. The image descriptions will be read by the screen reader, but will not appear visually on the screen.

To enable image descriptions in Chrome, go to Settings, then to Advanced at the bottom of the settings page. Find the "Accessibility" section and enable "Get image descriptions from Google." The feature can also be enabled for a single web page by right-clicking to bring up the context menu and selecting "Get Image Descriptions from Google."


