General Tech News · Software News For Professionals · Virus News

Nobody Likes Ransomware!

On May 12, a computer worm called WannaCry infected 320,000 Windows computers in 150 countries—and made headlines around the world. Here’s what you need to know.

Meet ransomware

Why the headlines? First, because WannaCry is one of the most widespread cases of ransomware—software that encrypts all of the files on your PC, and will not unlock them until you pay the bad guys. In WannaCry’s case, you’re supposed to pay $300 within three days; at that point, the price goes up. If you still haven’t paid in a week, all your files are gone forever.

(Why can’t the authorities just track who the money’s going to, and thereby catch the bad guys? Because you have to pay in Bitcoin, which is a digital currency whose transactions are essentially anonymous.)

The second notable feature: The WannaCry malware took advantage of a security hole in Windows that had already been discovered by the U.S. National Security Agency (NSA). But instead of letting Microsoft (MSFT) know what it had found, the NSA kept it a secret and, in fact, decided to write a “virus” of its own to exploit it.

Ransomware is nasty. There’s no way out, no fix. And even if you pay up, there’s no guarantee you’ll get your files back; some of these ransomware people take your money and run. (Why can’t these low-life hackers have more of a sense of decency?)

How security holes get patched

So why doesn’t Microsoft fix Windows’s security holes? It does—all the time. For example, if you have Windows 10, you’re safe from WannaCry. And even if you have Windows 7 or 8, and you accept Microsoft’s steady flow of software updates, you’re fine, too; Microsoft patched this hole back in March.

The only people vulnerable to WannaCry are people running old versions of Windows, and people who don’t keep their Windows updated with Microsoft’s free patches.

Here’s the real irony: Typically, a researcher discovers a security hole in Windows—and quietly tells Microsoft. Microsoft’s engineers write and release a patch—for a hole the hackers hadn’t known about before. But the bad guys know that millions of people won’t install that patch. So they write the virus after Microsoft has fixed the hole! They get the idea from the fix.

In any case, ransomware loves to target corporate networks: hospitals, banks, airlines, governments, utility companies, and so on. These are places that often don’t regularly update their copies of Windows. (Lots of them still run Windows XP, which is 16 years old. Microsoft no longer supports Windows XP, but to its credit, it has written and released a patch to prevent WannaCry for Windows XP, too.)

How not to get ransomware

If you’d rather not get a ransomware infection on your PC, here’s what to do.

  • Back up your computer. I know you know. But only 8% of people back up daily, according to a 2016 poll of over 2,000 people. For $74, you can get a 2-terabyte backup drive, and use your PC’s automatic backup software. Thereafter, if your files get locked by ransomware, you lose only a couple of hours as you restore from your backup. (For best results, keep the backup drive detached when you’re not using it, since some ransomware seeks out other connected drives.)
  • Turn on automatic updating of Windows. Get those patches before the bad guys do.
  • Don’t open file attachments you’re not expecting. Even if they seem to come from people you know. Don’t open zip files that come by email. Don’t ever click links that seem to be from your bank, or Google, or Amazon; they’re just trying to trick you into giving them your passwords.
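The backup habit above is easy to automate. Here’s a minimal sketch in Python of an incremental backup, copying only files that are new or changed since the last run (the function name and behavior are illustrative; in practice you’d rely on your OS’s built-in backup tool and just keep the drive detached between runs):

```python
import shutil
from pathlib import Path

def backup(source: Path, dest: Path) -> int:
    """Copy new or modified files from source into dest; return how many were copied."""
    copied = 0
    for f in source.rglob("*"):
        if not f.is_file():
            continue
        target = dest / f.relative_to(source)
        # Incremental: copy only files that are missing or newer than the backup copy.
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied += 1
    return copied
```

Run something like this on a schedule, then unplug the drive—an attached backup is exactly what some ransomware goes hunting for.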

Back up, turn on updating, don’t open email attachments you’re not expecting.

This has been a public service message.

Fun With Pinterest

Pinterest’s New AI For Foodies!

Pinterest has found perhaps the most delectable use of artificial intelligence and image recognition yet: to serve up recipes based on photos of meals you’re eating.

The eight-year-old San Francisco, Calif.-based startup rolled out an update on Tuesday that enables its AI-powered feature, Pinterest Lens, to detect and analyze what you’re eating in any given photo. Lens then suggests a recipe “inspired” by the food, meal or dish.

Say you snap a photo of jambalaya you’re chowing down on. Lens figures out you’re eating jambalaya, then recommends a jambalaya recipe for you to try. This is different from how many other companies are applying computer vision technology, a Pinterest spokesperson pointed out.

Facebook (FB) already uses image recognition to identify friends in photos and suggests tagging them. Meanwhile, a feature called “automatic alternative text,” released last year, enables visually-impaired Facebook users to hear a somewhat detailed description of the photo. A person using automatic alternative text on a photo of a group of people in the woods would hear, “This image may contain: Three people, smiling, outdoors.”

Lens joins a series of recipe-related features Pinterest announced on Wednesday, including a new filter that lets users search by dietary preferences like vegan, vegetarian, gluten-free and paleo, as well as time filters that sift out recipes based on how long they take to make.

If Lens is accurate in detecting meals and offering relevant recipes, the feature would be a significant step forward for AI-powered applications aimed at mainstream users. Of course, whether the recipes it serves pass muster with discerning palates is another matter entirely.

All Things Google

Google Scrambles To Fix Android O’s Biggest Problem To Date

The next version of Android doesn’t have a name yet, only a letter. But “Android O”—which should get a dessert-based moniker when it ships later this summer—does have a set of features that Google (GOOG, GOOGL) pitched on the first day of its I/O developer conference.

As in earlier updates, Android O brings a grab-bag of features. Some address lingering pain points in this mobile operating system, while others borrow from features Apple (AAPL) added to iOS. Another represents an overdue remedy for a problem that’s afflicted Android since its debut almost nine years ago: the zombie-like persistence of obsolete versions.

And of course, there’s better emoji support.

Project Treble: easing updates, we can only hope

The most important part of O—a rebuilding of Android’s foundation to remove an obstacle to timely software updates—barely got a mention in the almost-two-hour keynote from Google CEO Sundar Pichai and other Googlers that opened I/O at the Shoreline Amphitheatre Wednesday.

As of May 2, the current Nougat release that debuted last August runs on 7.1% of all devices that had connected to the Play Store in the prior seven days. The most widely used Android release was the two-year-old Marshmallow, on 31.2% of devices. At Apple, meanwhile, 79% of iOS devices that visited the App Store on Feb. 20 ran the current iOS 10 release.

Project Treble, announced in a blog post last week, aims to free chipset vendors from having to tweak the code that keeps their circuitry talking to the rest of Android. Treble will add a layer of translation code between that proprietary software and the rest of Android—the equivalent of putting a standard-size joint atop some intricate plumbing in the basement. A hardware vendor can write Treble-compliant, circuit-specific code once for a device and know that future versions of Android will understand it without further rewrites.
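Conceptually, that “standard-size joint” is a stable, versioned interface sitting between the OS and the vendor’s proprietary code. A rough sketch of the idea in Python (all names here are invented for illustration; real Treble interfaces are defined in Android’s own interface language, not Python):

```python
from abc import ABC, abstractmethod

class CameraHal(ABC):
    """The stable interface: the OS only ever calls these methods."""
    @abstractmethod
    def capture(self) -> bytes: ...

class VendorCameraHal(CameraHal):
    """Written once by the chip vendor against the stable interface."""
    def capture(self) -> bytes:
        return b"raw-frame"  # stand-in for proprietary driver code

def os_take_photo(hal: CameraHal) -> bytes:
    # Any future OS version that honors the interface can reuse the
    # vendor's implementation without a rewrite.
    return hal.capture()
```

The payoff is the same in either language: as long as the interface stays stable, the code on either side of it can be updated independently.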

That won’t end all Android-update holdups. As this post from Ron Amadeo at Ars Technica explains, Treble won’t stop phone vendors from shipping weird Android interfaces (hello, Samsung!) that demand their own revisions. But it’s an important step in an operating system that now runs on more than 2 billion active devices.

Security and privacy

The afterlife of abandoned versions of Android remains the biggest problem in Android security, but many users worry instead that they’ll pick up malware in the Play Store. In reality, that’s a vanishingly small risk compared to the odds of getting hacked after downloading an app from elsewhere, thanks to a variety of malware scans that happen in the background.

Android O will add more layers of security hardening but will also make these app-safety checks visible in a Google Play Protect feature showing their status. In a sense, it’s security theater. As Stephanie Saad Cuthbertson, product-management director for Android, said in the keynote, “Most Android users don’t know these services come built into Android devices with Play.” But if it gets people to trust the Play Store over less-secure sources, it’ll be a worthwhile production.

This update will guard against a different device threat—a runaway app killing your battery life—by imposing limits on how often apps running in the background can ask for a device’s location or make other requests of the system. If that sounds like an overdue move… it probably is.

In the area of privacy, Android O will randomly assign different device IDs to apps—a small but significant change that will make it harder for a developer of multiple apps to correlate your use among them.
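One common way to implement that kind of per-app identifier—a sketch of the general technique, not Android’s actual code—is to derive each app’s ID from a secret random value on the device plus the app’s package name, so two apps from the same developer get unrelated IDs they can’t correlate:

```python
import hashlib
import hmac

def app_scoped_id(device_secret: bytes, package_name: str) -> str:
    """Derive a stable per-app identifier; different apps get unrelated IDs."""
    return hmac.new(device_secret, package_name.encode(),
                    hashlib.sha256).hexdigest()[:16]

# Hypothetical example: the secret would be random and kept on-device.
secret = b"per-device-random-secret"
id_a = app_scoped_id(secret, "com.example.appA")
id_b = app_scoped_id(secret, "com.example.appB")
# Same app always sees the same ID; different apps see different ones.
```

Because the derivation is one-way, neither app can recover the shared secret or predict the other’s ID.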

Notifications, picture-in-picture and other interface tweaks

Android O will require apps to group their notifications—the little nags that pop down from the top of the screen—into “channels” that you can turn on or off. It’s meant to stop apps from being too needy; in practice, having yet another option to set may not yield much difference.

You’ll also be able to snooze notifications, which may help avoid losing sight of yet another message from a friend coming in on yet another messaging app. These changes should certainly help Android’s notifications experience stay ahead of the same in iOS, where you can’t even clear all notifications unless you use a device with Apple’s pressure-sensitive 3D Touch control.

App icons will be able to show a different sort of notice: colored “notification dots” at the top right corner of each that indicate something’s changed. That seems a pretty clear case of Google following Apple’s lead, not that there’s anything wrong with that.

A picture-in-picture option will pick up on the examples of some Android vendors by letting you watch a video clip or chat in one corner while taking notes or checking your calendar.

The interface change I’m most likely to appreciate: “Smart Text Selection,” in which Android will automatically select all of a street address, phone number or other significant block of text once you start trying to pick it up. This won’t work perfectly out of the box (as I saw on a demo phone), because Android uses “on-device intelligence” to build a phone-specific model of the kinds of data you often copy and paste.

By not syncing this personal data to the cloud—as Cuthbertson boasted, “without any data leaving the device”—Google borrows yet again from its neighbors at Apple.
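The “expand the tap into the whole thing” behavior can be approximated with simple pattern matching. A toy Python sketch for phone numbers (real Smart Text Selection uses an on-device machine-learning model, not a regex):

```python
import re

# Loose pattern: a run of digits, dashes, spaces, and parens.
PHONE = re.compile(r"\+?\d[\d\-\s()]{6,}\d")

def smart_select(text: str, tap_index: int) -> str:
    """Expand a tap position to the full phone number around it, if any."""
    for m in PHONE.finditer(text):
        if m.start() <= tap_index < m.end():
            return m.group()
    # Otherwise, select just the word under the tap.
    start = text.rfind(" ", 0, tap_index) + 1
    end = text.find(" ", tap_index)
    return text[start:end if end != -1 else len(text)]
```

Tapping anywhere inside “212-555-0123” would select the whole number rather than one chunk of digits; the ML version generalizes this to addresses, business names, and whatever else you tend to copy.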

The interface tweak everybody may notice first? A new “EmojiCompat” feature that should end the stigma of an iOS user sending a new emoji that doesn’t appear correctly on Android. Goodbye, blank boxes; hello, taco and unicorn emoji.

Amazon News

Amazon’s Alexa Imitates “The Jetsons” With Its New Ability!

For what was originally supposed to be a mail-order bookstore, Amazon (AMZN) sure is doing a lot of trailblazing.

I mean, Amazon came up with the idea for the Echo—the cylinder that serves as a sort of Siri for the home—all by itself. It invented that product category, putting Google, Apple, Microsoft, and Samsung into the awkward position of being copycat followers.

Now that more than 10 million people have Echo devices, Amazon has just taken another trailblazing step: With a free software update, it has turned them into hands-free speakerphones. Calling Chris is as easy as saying “Alexa, call Chris” from across the room, even if your hands are goopy with flour or you can’t find your phone.

Over at Chris’s house, the ring atop the Echo pulses green, a pleasant chime sounds, and Alexa announces, “David [or whatever your name is] would like to talk.”

Chris says “Alexa, answer,” and the conversation begins.

At the end of the call, either one of you can say “Alexa, hang up” to end the chat.

So whom can you call? Anyone in your phone’s address book who has either an Amazon Echo or the free Alexa app. That’s right: The Alexa app is now an internet calling app, like Skype or FaceTime Audio. Like them, it’s free and doesn’t use any cellular calling minutes. [Update: Not to be outdone, Google has now announced that it will bring hands-free calling to Google Home, its Alexa clone—except those calls go to regular phone numbers. No charge.]

By the way: Although the big-ticket item here is hands-free speakerphone calls, there’s also what Amazon calls messaging. It’s not what you’d think, though. It’s not sending text messages, exactly. And it’s not voicemail, exactly. It’s a cool kind of hybrid.

You say “Alexa, send a message to Chris,” and you’re invited to speak a message. You’re sending an audio recording. The ring at the top of Chris’s Echo glows green and chimes once; when Chris says, “Alexa, play my message,” your recording plays back.

But if Chris opens the Alexa app, your message also plays there, with an automated typed transcript. So it’s kinda like a text message in that way. Within the app, you can also send typed texts.

It’s also kinda like voicemail, in that you can leave a recorded message for someone—but the difference is that you’re in control. You decide to leave a message before you even call, rather than just hoping the other person doesn’t answer.

What it’s good for

At its finest, Alexa Calling is like a Jetsons version of the home phone. Not only is it cordless, it’s phoneless. You don’t have to find a handset, pick it up, press buttons, hold it up to your head; you just speak into the room. You may sound pretty echoey to the other guy if you’re really far from the Echo—but if you’re within a few feet, it sounds great.

And of course, if you’re using your phone instead of an Echo, it sounds just like a speakerphone call.

It’s likely that there are some people you contact often enough that the Alexa calling thing could be handy—a sibling, parent, child, boss, lover. Alexa calling is the communication equivalent of the One-Click Buy button on Amazon.com: It eliminates so many steps, so much friction, that you’re inclined to use it more.

What it’s not good for

There are plenty of limitations and footnotes to Alexa calling. These don’t mean that Alexa calling is worse than our existing communication methods—only that it’s got a different set of pros and cons.

  • Limited calling circle. You can call only people who have an Amazon Echo, Echo Dot, or the free Alexa app. You can’t call someone who has the battery-operated Echo Tap, and you can’t call someone’s regular cellphone number. You can call only someone who’s (a) in your phone’s Contacts, and (b) has made himself available for Alexa calling. (The setup takes about five taps, and requires typing in a security code that Amazon sends you via text message.) So it’s a pretty small circle—but then again, Skype, WhatsApp, FaceTime, and Snapchat started with small networks, too.
  • Everything rings simultaneously. When someone calls, all your Echos ring at once, and your phone app “rings.” In other words, you can’t use the Echos as an intercom within your house—but Amazon tells me that feature is coming soon. Very cool.
  • It’s all speakerphone. If you have an Echo, all calls are all speakerphone, all the time. Any family member can hear. Any family member can play back the messages, too. So, you know: sext with care.

Finally, at the moment, there’s no way to block incoming calls from specific people in your Contacts. You can turn on Do Not Disturb for all calls, but you can’t block just one idiot who’s abusing the privilege.

The tech blogs are having a field day with this one, calling it a “glaring security hole” and conjuring up the prospect of unwanted incoming calls from abusive ex-boyfriends and creepy pedophiles.

Frankly, though, the likelihood of this kind of abuse seems pretty slim. Your ex would have to know that you’ve got Alexa calling installed; would have to turn it on himself; would have to call you; and, upon hearing Alexa announce, “So and so would like to talk,” you’d have to say, “Alexa, answer.”

Above all, you’d have to keep your ex in your Contacts. And why would you do that?

In any case, Amazon says that it will add the option to block people within a few weeks.

Chalk up another Amazon invention

I’m already using Alexa calling for quick check-ins with my wife, my mom, and my assistant; it’s just super cool, easy, quick, and free. It’s got elements of a home phone line, a cellphone on speaker, and a walkie-talkie—but it’s not any of those.

Amazon has big plans for Alexa calling. We know that you’ll soon be able to direct calls to specific people or devices within your house. We know that you’ll be able to make video calls using the same steps, once the new Echo Show becomes available in June. (It’s an Echo with a screen and camera.) We know that, with permission from both parties, you’ll be able to “drop in” to peek through another Echo’s camera at any time—to keep an eye on an elderly relative, for example.

And I’ll bet that soon, Alexa will recognize who in your household is speaking (as Google Home does now), and will therefore maintain different message “boxes” for different people.

In other words, I love Alexa calling. It’s free, it’s well conceived, it works flawlessly, and it’s only beginning.

All Things Google

The Vice President For Google Home Has No Intention Of Letting Amazon Beat Google!

Rishi Chandra, Google’s vice president for all things Google Home, gives us the inside scoop:

He covered what’s new (or coming very soon) in the Google Home device, but first he emphasized that none of it could have happened without the big new feature, person recognition. That is, the Google Home now knows who is speaking, and can deliver the answer based on that person’s calendar, work commute, music playlists, Uber account, and so on.

“It knows who’s talking,” Chandra says. “In the end, an assistant can only be so useful if it understands who you are, right? So, if my wife is asking something about her calendar, then it needs to answer with her calendar. And if I’m asking, it needs to be answered with my calendar. And that’s actually enabled all of the announcements we were making today.”

(And one more that Google didn’t make: Shortly, Google Home will let you add or edit calendar appointments and reminders by voice, and read email summaries to you by voice. What took so long? Simple, Chandra says: Those features didn’t make sense until the Home could tell who’s talking—whose calendar and email to check.)

So what are the big new features? First, conversations with Home no longer have to begin with you. If it notices something that you might find important—a traffic delay for an upcoming appointment, or a flight delay—its ring glows to get your attention. When you say “OK Google, what’s up?”, it gives you the bad news.

Second, free phone calls. “The most interesting and most exciting thing that we announced today is the ability to use Google Home to call different phone numbers. The ability to say, like, ‘hey Google, call mom.’ It can call any land-line or mobile number in Canada or US for free.”

Note that this is not the same thing that Amazon added to its Echo last week—free calls between Amazon (AMZN) Echos (or Amazon apps). This is free phone calls to phone numbers.

Finally, Google Home can now send certain of its responses to a nearby screen, like your phone or TV.

“For certain things, voice is not going to be good enough, right? You need to see a visual,” Chandra said. “So, in case I want to get maps, you’d get a notification right on your phone that says, ‘hey, you can actually go open a Google Maps right now with the exact location you’re trying to go.’”

Google has lagged Amazon in the number of smarthome or Internet of Things gadgets you can control by voice, too—but that’s about to change, Chandra says.

“We’re catching up significantly. When we launched Google Home, we had four partners. Today, any third party developer/device manufacturer can start interfacing with the assistant on their own. So, it’s a self-publishing tool. And so, we expect this to go from 70 to hundreds of different integrations.”

The Amazon Echo, now in nearly 11 million homes, has had an impressive head start. But clearly, Google has no intention of settling for second place.

All Things Google

Google’s Assistant Is Getting Smarter And Faster!

A highlight: Google Assistant is growing up. (For the uninitiated, Assistant is Google’s version of Siri.)

First things first: What is Assistant? An app? A product? A feature?

Scott Huffman, Google’s vice president for Assistant, explains: “The Google Assistant is not an app or a device. What you really want from an Assistant is not just a thing that’s in one place. You want something that you can have a conversation with and get things done wherever you are, whatever context you’re in.”

That could include your phone, your car, or your Google Home.

“I leave home,” Huffman says by way of example. “I say to my Google Home, ‘how late’s Home Depot open? Well, give me the directions.’ It should say, ‘Sure, they’re on your phone.’ As you walk out the door, the Assistant on your phone picks up the conversation.”

Assistant is built into every Android phone (long-press the Home button to bring it up)—but starting this week, it’s also available on the iPhone, as the Google Assistant app.

Either way, you can now type those questions and commands to Assistant instead of speaking them, if you prefer—something you can’t do with, say, Siri. Handy when it’d be inappropriate to talk aloud.

Then there’s Google Lens.

“Google’s been making deep investments in vision and machine perception,” Huffman says, “and so we’re building that into the Assistant. So now, I can just open the viewfinder inside the assistant and say, hey, what about this? And the assistant starts to give me options.”

For example, you can point the camera at a flower, a building, a painting, a book cover, a restaurant storefront. The Assistant recognizes what you’re looking at, and instantly gives you information: identification of the flower, ratings for the restaurant, and so on.

And not just details, but actions to choose. “One of the examples we showed is pointing the camera at the marquee of a show, where it says, this band at this time. And then you get options: Do you want to hear that band’s music? Do you want to buy tickets? Do you want to add it to your calendar? Do you want to share it with your friend?”

So just how smart can Assistant get? Huffman knows where he wants it to go.

“I can tell you how I say it to my team,” he says. “I say, ‘Hey guys, we’re just building this really simple thing. All it has to be is that anyone can have a conversation with it anywhere, anytime, with no friction.  We should understand that conversation, whatever it’s about. And then just do whatever they ask us to do. Let’s just build that.”

Sounds good. Get to it, team!

All Things Google

Google Lens. Game Changer Or Not? You Decide.

The Lens feature, which will be used on phones, sees what the viewer sees through the camera and provides information about the object. In a demonstration, Pichai showed the app correctly identifying a flower, entering a Wi-Fi router’s password and SSID from its sticker, and pulling up a restaurant’s Google rating and reviews, each time just by pointing the phone’s camera at the object. Google wants to pre-empt your googling.

Google Lens follows other visual recognition products put out recently by other tech companies. Amazon, for instance, has had a product recognition tool built into its shopping app to allow users to see how much the company will undercut brick-and-mortar competitors for the same item. Samsung’s Bixby app can scan a photo of a business card and save the information as a contact, something more aligned with Google’s new capabilities.

Powering all this is new hardware from Google, Tensor Processing Units, or TPUs, which are behind Google’s AI training system. Users will never see these “deep learning” systems, however, because Google is all about the cloud doing the heavy lifting it takes for a computer to identify real-life stuff through its camera.

As the HBO show “Silicon Valley” illustrated on a recent episode with its “food Shazam” app, getting a camera to identify real-life stuff from a variety of angles, lighting situations, and with different phone cameras is quite the computational challenge. This time, however, Google isn’t buying these processors from Nvidia (NVDA), but is making its own, optimized to its software. (Nvidia was Yahoo Finance’s company of the year in 2016.)

Tech companies have become increasingly obsessed with the camera, seeing it as a gateway for integration between the virtual world and the real one. Snapchat (SNAP) calls itself a “camera company,” Facebook (FB) is doubling down on augmented reality through phones, and smartphone manufacturers have been engaged in an arms race for camera quality and features.

For recognizing unknown objects like a flower species, Google Lens shows itself to be an extremely useful tool, a “Shazam” for the physical world. But its use of pointing the phone at a restaurant for info raises the question of what is too far. With GPS already on the phone and a compass showing your orientation, why would you even have to raise up the phone to get the restaurant in the camera? Still, the technology is impressive, and Google is showcasing an enormous amount of processing power that could be very useful.

General Tech News · Software News For Professionals

Salesforce Brings Forth Einstein Analytics To Help Professionals Optimize Their Daily Activities

Salesforce launched its Einstein Analytics app portfolio on Thursday, leveraging artificial intelligence (AI) to boost the analytics capabilities available to users on its CRM platform. According to a press release, it will help find new insights and recommend “actions to accelerate sales, improve customer service and optimize marketing campaigns.”

Customers already have access to some analytics tools in Salesforce, but Einstein Analytics is supposed to weave AI into those tools so that they provide more effective results. Analytics are more “important than ever before,” the release said, and the new offering could help users improve their approach, without having to write the algorithms themselves.

“With Einstein Analytics, every CRM user can now see not only what happened in their business, but why it happened and what to do about it, without requiring a team of data science experts,” Ketan Karkhanis, general manager of Salesforce Analytics, said in the release.

Some of the apps in the portfolio are specific to roles in areas like sales, customer service, and marketing, the release said. These apps measure a set of key performance indicators (KPIs) that are specific to that role, in order to help the user do their job more effectively. For example, apps specific to marketing professionals will offer certain actions to take to improve a campaign, based on the data presented, the release said.

Salesforce also launched Einstein Discovery, which provides “actionable AI” to users. Einstein Discovery checks the validity of trends in data, explains how it identified the trend, and walks users through next steps they can take to act on it, the release said. After looking at sales data, for example, Discovery can identify what factors most impact the closing of a deal, and how that varies by location and more, the release noted.
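As a rough illustration of the kind of analysis described—with made-up data and a deliberately simple method, not Salesforce’s actual algorithm—you can group historical deals by some factor and compare win rates to see which factor values seem to matter:

```python
from collections import defaultdict

def win_rate_by(deals, factor):
    """Win rate for each value of a factor (e.g. region, deal-size band)."""
    won, total = defaultdict(int), defaultdict(int)
    for d in deals:
        total[d[factor]] += 1
        won[d[factor]] += d["won"]  # 1 if the deal closed, else 0
    return {value: won[value] / total[value] for value in total}

# Hypothetical historical deals.
deals = [
    {"region": "West", "won": 1}, {"region": "West", "won": 1},
    {"region": "East", "won": 0}, {"region": "East", "won": 1},
]
```

On this toy data, `win_rate_by(deals, "region")` shows West deals closing at twice the rate of East deals—the sort of “why it happened” signal Discovery is meant to surface automatically, at scale, across many factors at once.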

As reported by ZDNet’s Larry Dignan, business users can also build their own models in Discovery in order to glean insights from their data.

In order to help its users get started working with analytics, the release said, Salesforce has also released 12 online learning courses to build out user knowledge of Einstein Analytics. Additional apps in the Salesforce AppExchange provide professionals with a way to boost the power of Einstein Analytics as well, the release said.

Einstein Discovery is available now, starting at $75 per user, per month. Custom Einstein Analytics Apps, also available now, cost $150 per user, per month to start.

The 3 big takeaways for our readers

  1. Salesforce’s new Einstein Analytics app portfolio wants to add additional AI power to the platform’s analytics tools, making it easier for businesses to get real insights.
  2. Some Einstein Analytics apps geared toward sales, customer service, and marketing are built around the KPIs for those segments.
  3. Salesforce also launched an “Actionable AI” tool called Einstein Discovery, which allows users to build their own models for data analytics, among other features.

All Things Google

Key Announcements From Google’s Developer Conference

Like most developer conferences these days, Google I/O’s primary audience is software writers. But the opening keynote unveiled lots of developments that even non-nerds can understand: new features coming soon to Google products.

CEO Sundar Pichai opened his keynote speech with an observation: Google (GOOG, GOOGL) may have begun life as a search company, but it has become an artificial intelligence (AI) company. Examples were everywhere.

Google Lens, Assistant, Photos

For example, he announced a new technology called Google Lens, which you can think of as Shazam for the whole world.

For example, you can point the camera at a flower, a building, a painting, a book cover, a restaurant storefront. The app recognizes what you’re looking at, and instantly gives you information: identification of the flower, ratings for the restaurant, and so on. Or you can point the camera at a marquee of a rock concert; Google Lens offers buttons for Play Music, Buy Tickets, or Add to Calendar.

Google Lens is part of Google Assistant, Google’s broader voice-assistant technology. Assistant is built into every Android phone, and is now available as an iPhone app. And you can now type questions and commands to Assistant instead of speaking them, if you prefer.

Google Photos, the company’s free, unlimited-storage online photo gallery, has always been able to identify who is in each photo. Now, if it spots, say, your brother in a photo, it offers to send that picture to him. Creepy, but convenient.

Google Home

Google announced improvements in its Google Home device, too (basically Google’s version of the Amazon Echo). For example, proactive notifications. If the Home learns something that you might find important—a traffic delay for an upcoming appointment, or a flight delay—its ring glows to get your attention. When you say “OK Google, what’s up?”, it tells you.

Second, free phone calls. You can say “OK Google, call mom,” and the Home acts as a futuristic speakerphone. It can call any landline or mobile number in the US or Canada for free. (This is not the same thing that Amazon added to its Echo last week—free calls between Amazon Echos or Amazon apps. This is free phone calls to phone numbers.)

Finally, Google Home can now send certain of its responses to a nearby screen, like your phone or TV. If you’ve asked for directions, it can throw a map onto your phone, for example.

Android ‘O’

Finally, Google announced the availability of the beta version of the next Android operating system for phones. The promised enhancements are almost comically small: a picture-in-picture mode for videos, redesigned emoji, faster startup, notification dots on app icons (as on the iPhone), color text in notifications.

There’s also a slimmed-down version of Android, called Android Go, for underpowered, cheap phones used in third-world countries. It uses far less horsepower and cellular data than the full-blown Android.

The most important new feature, Google didn’t even mention in the keynote: a new technology that may let Android phones upgrade to new versions of Android without having to wait a couple of years for the cellular company to get its act together. (Details here.)

Overall, it’s clear that artificial intelligence and machine learning are indeed becoming Google’s new focus. Maybe next year, they should call the conference Google A/I.

Apple News

Over 20 New Great Features In Apple’s iOS11 For iPhone and iPad!

If there were one big lesson from the announcements at Apple’s developer conference Monday morning, it’s this: It’s getting harder and harder to add Big New Features to a phone operating system.

When iOS 11, the new, free iPhone/iPad OS upgrade, comes this fall, you won’t gain any big-ticket feature. Instead, you’ll get a wholllllle lot of tiny nips and tucks. They seem to fall into five categories: Nice Tweaks, Storage Help, iPad Exclusives, Playing Catch-Up, and Fixing Bad Design.

Nice Tweaks

Expectations set? OK—here’s what’s new.

  • A new voice for Siri. The new male and female voices sound much more like actual people.
  • One-handed typing. There’s a new keyboard that scoots closer to one side, for easier one-handed typing. (You can now zoom in Maps one-handed, too.)
  • Quicker transfer. When you get a new iPhone, you can import all your settings from the old one just by pointing the new phone’s camera at the old one’s screen.
  • Do not disturb while driving. This optional feature sounds like a really good one. When the phone detects that you’re driving—because it’s connected to your car’s Bluetooth, or because the phone detects motion—it prevents any notifications (alert messages from your apps) from showing up to distract you. If someone texts you, they get an auto-response like, “I’m driving. I’ll see your message when I get where I’m going.” (You can designate certain people as VIPs; if they text the word “urgent” to you, their messages break through the blockade.)
  • Improvements to Photos. The Photos app offers smarter auto-slideshows (called Memories). Among other improvements, they now play well even when you’re holding the phone upright.
  • Improvements to Live Photos. Live Photos are weird, three-second video clips, which Apple (AAPL) introduced in iOS 9. In iOS 11, you can now shorten one, or mute its audio, or extract a single frame from that clip to use as a still photo. The phone can also suggest a “boomerang” segment (bounces back and forth) or a loop (repeats over and over). And it has a new Slow Shutter filter, which (for example) blurs a babbling brook or stars moving across the sky, as though taken with a long exposure.
  • Swipe the Lock screen back down. You can now get back to your Lock screen without actually locking your iPhone—to have another look at a notification you missed, for example.
  • Smarter Siri. Siri does better at anticipating your next move (location, news, calendar appointments). When you’re typing, the auto-suggestions above the keyboard now offer movie names, song names, or place names that you’ve recently viewed in other apps. Auto-suggestions in Siri, too, include terms you’ve recently read. And if you book a flight or buy a ticket online, iOS offers to add it to your calendar.
  • AirPlay 2. If you buy speakers from Bose, Marantz, and a few other manufacturers (unfortunately, not Sonos), you can use your phone to control multi-room audio. You can start the same song playing everywhere, or play different songs in different rooms.
  • Shared “Up Next” playlist. If you’re an Apple Music subscriber, your party guests or buddies can throw their own “what song to play next” ideas into the ring.
  • Screen recording. Now you can do more than just take a screenshot of what’s on your screen. You can make a video of it! Man, will that be helpful for people who teach or review phone software! (Apple didn’t say how you start the screen recording, though.)

Storage Help

Running out of room on the iPhone is a chronic problem. Apple has a few features designed to help:

  • Camera app. Apple is adopting new file formats for photos (HEIF, or High Efficiency Image Format) and videos (H.265, or High Efficiency Video Coding, also known as HEVC), which look the same as they did before but consume only half the space. (When you export to someone else, they convert to standard formats.)
  • Messages in iCloud. When you sign into any new Mac, iPhone, or iPad with your iCloud credentials, your entire texting history gets downloaded automatically. (As it is now, when you sign in on a new machine, you can’t see the Messages transcript history.) Saving the Messages history online also saves disk space on your Mac.
  • Storage optimization. The idea: As your phone begins to run out of space, your oldest files are quietly and automatically stored online, leaving Download icons in their places on your phone, so that you can retrieve them if you need them.
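If you like, you can do the “half the space” math yourself. Here’s a quick back-of-the-envelope sketch in Python; the 0.5 compression ratio and the 10 GB library size are my own illustrative assumptions based on Apple’s claim, not official figures:

```python
# Rough sketch of the HEIF/HEVC space savings Apple describes.
# The 0.5 ratio reflects the "about half the space" claim;
# real-world savings vary from photo to photo.

def converted_size_mb(jpeg_library_mb: float, ratio: float = 0.5) -> float:
    """Estimate library size after converting JPEG photos to HEIF."""
    return jpeg_library_mb * ratio

# A hypothetical 10 GB (10,240 MB) photo library:
print(converted_size_mb(10_240))  # → 5120.0
```

In other words, a phone that’s bursting at the seams gets, in effect, twice the photo storage for free.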

iPad Exclusives

Many of the biggest changes in iOS 11 are available only on the iPad.

  • Mac features. In general, the big news here is that the iPad behaves much more like a Mac. For example, you can drag-and-drop pictures and text between apps. The Dock is now extensible, available from within any app, and perfect for switching apps, just as on the Mac. There’s a new Mission Control-type feature, too, for seeing what’s in your open apps—even when you’ve split the screen between pairs of apps.
  • Punctuation and letters on the same keyboard. Now, punctuation symbols appear above the letter keys. You flick down on a key to “type” the punctuation—no more having to switch keyboard layouts.
  • A file manager! A new app called Files lets you work with (and search) files and folders, just as you do on the Mac or PC. It even shows your Box and Dropbox files.
  • Pencil features. If you’ve bought Apple’s stylus, you can tap the Lock screen and start taking notes right away. You can mark up PDFs just by starting to write on them. A new feature lets you snap a document with the iPad’s camera, which straightens and crops the page so that you can sign it or annotate it. Handwriting in the Notes app is now searchable, and you can make drawings within any Note or email message.

Playing Catch-Up

With every new OS from Google (GOOG, GOOGL), Microsoft (MSFT), or Apple, there’s a set of “us, too!” features that keeps them all competitive. This time around, it’s:

  • Lane guidance. When you’re driving, Maps now lets you know which lane to be in for your next turn, just as Google Maps does.
  • Indoor Maps. The Maps app can now show you floor plans for a few malls and 30 airports, just as Google Maps does.
  • Siri translates languages. Siri is trying to catch up to Google Assistant; it can now translate phrases from English into Chinese, French, German, Italian, or Spanish. You can say, for example, “How do you say ‘Where’s the bathroom?’ in French?”
  • Siri understands followup questions. Siri now does better at understanding followup questions. (“Who won the World Series in 1980?” “The Phillies.” “Who was their coach?”)
  • Person-to-Person payment within the Messages app. Now, you can send payments directly to your friends—your share of the pizza bill, for example—right from within the Messages app, much as people do now with Venmo, PayPal, and their ilk. (Of course, this works only if your friends have iPhones, too.) When money comes to you, it accrues to a new, virtual Apple Pay Cash Card; from there, you can send it to your bank, buy things with it, or send it on to other people.
  • iCloud file sharing. Finally, you can share files you’ve stored on your iCloud Drive with other people, just as you’ve been able to do with Dropbox for years.

Fixing Bad Design

Some of the changes repair the damage Apple did to itself in iOS 10. For example:

  • Redesigned apps drawer in Messages. All the stuff they added to Messages last year (stickers, apps, live drawing) cluttered up the design and wound up getting ignored by lots of people. The new design is cleaner.
  • Redesigned Control Center. In iOS 10, Apple split up the iPhone’s quick-settings panel, called the Control Center, into two or three panels. You had to swipe sideways to find the control you wanted—taking care not to swipe sideways on one of the controls, thereby triggering it. Now it’s all on one screen again, although some of the buttons open up secondary screens of options. And it’s customizable! You can, for example, add a “Record voice memo” button to it.
  • App Store. The App Store gets a big redesign. One chief fix is breaking out Games into its own tab, so that game and non-game bestseller lists are kept separate.

Coming this fall

There are also dozens of improvements to the features for overseas iPhones (China, Russia, India, for example). And many, many enhancements to features for the disabled (spoken captions for videos and pictures, for example).

So what’s the overarching theme of the iOS 11 upgrade?

There isn’t one. It’s just a couple hundred little fine-tunings. All of them are welcome—and all of them are aimed at keeping you happily trapped within Apple’s growing ecosystem.