The dark side of AI

 

There are two sides to everything, including innovation.

Take AI. It can be turned to good, as discussed last week, but it can just as easily be used by the dark side.

The dark side of AI holds consequences for the real world, both socially and personally.

Stuff like dating, which I, as a 22-year-old single woman, take very personally.

An article in Gizmodo describes the Future of Online Dating and that future is brutal. Human relations are usually based on games, emotions, negotiations, etc.

Online dating itself kills some of that: you both know you're looking, and much of the play is lost from that moment. Depending on the app, you know you're both ready for sex and maybe more, spending time together and even a relationship, otherwise you would never have opened the app or site. But at least they offer some space for the fun of maneuvering.

The future of online dating looks more like dog breeding: you take two dogs of the same breed, with genetic desirability and compatible traits, and they give you nice puppies.

How? The AI algorithm will take a look at your social media, reveal a lot about you (who you are, what you like, friends, family and more; check out the article for details, it's terrifying and depressing) and give you the perfect match.

Thank you, AI, but I prefer to remain a messy human, after all.

AI is a big player in helping sites addict us to increase their revenue. More and more, AI tells us what to buy (think Amazon suggestions) and, taking a page from game makers, helps online businesses and social media increase the addictiveness of their sites through profiling and data analysis. Miki also wrote about the dark side in When What You See Ain't What You Get.

Because my company builds AI software for drones that take on dangerous jobs so humans don't have to risk their lives, I've been talking with many people about the two sides of new technology. All of us, including the engineers, hope AI will only be used in positive ways.

But none of us are so naïve that we believe that will happen.

AI, what good have you got for me?

 

We never know where new inventions will lead us. We tend to overestimate them in the short term and underestimate them in the long term. We also tend to see only the positive, forgetting that almost everything has a dark side, too.

So just keep that in mind when we're talking about AI: there definitely won't be robots walking the streets in the next few years, but you can't even begin to imagine the world that you, your children, or your grandchildren will find yourselves in 50 years from now.

It is because the world is so full of mysteries and technology is so unpredictable that it fascinates me so.

Today I want to talk not about the future, but about today. I really enjoy tracking the tech news, especially AI, because, as you know, it’s what my company does, so I’m actually involved.

Today I want to share some of the exciting news on the positive side of AI development; these are my top four.

We’ve gotten used to hearing about the innovative stuff that happens in healthcare – NTR Lab once worked with a local university on a project for tumor image recognition — the results were fascinating.

While serious diagnostic and treatment breakthroughs are amazing, they aren’t the health problems most of us face in our everyday life.

However, mental health is something that, while often not discussed, most of us deal with to some degree, even if it’s just a down day. That’s why I find the suicide trends tracker so amazing.

I'm undecided whether the second one is good or bad, so writing about it is a bit confusing. Take a look at this article about facial recognition.

It's about modern AI algorithms catching criminals in a crowd. I find this both scary and encouraging at the same time. It's obviously good for public safety, like finding criminals or missing people (especially children). But when I think about being tracked wherever I go, it's not so great anymore.

Third, history and sociology can really benefit from using AI and related technologies, such as modern 3D models, in their studies. It's probably too early, but the idea that AI could contribute something revolutionary to historical studies, such as this effort to recover lost languages, makes me feel good, assuming, of course, that society pays attention to the results.

Finally, if technology can empower people by getting rid of social gaps, then it has the potential to make society more homogenous and, hopefully, friendly.

Tech such as the wearables that allow blind runners to run independently is the future. Maybe once there really is no difference (and I don't mean pity, toleration or politically correct stuff) between us and people who are not like us, there will be more inclusion, collaboration and even a bit more peace.

All the result of technology filling the gap.

And I find that really cool.

The Seven Deadly Sins of AI Predictions

 

I love working at NTR Lab, partly because of the people, but also because we have a huge number of AI-related projects and a really strong R&D department. It's fun to connect what I read in the news with what is going on in my company.

I've written a lot about our drone team, AI, etc. But because I work with this kind of stuff I can't avoid thinking about it in a philosophical way, as I'm sure many of you do. That's why I want to share an article from the MIT Technology Review called "The Seven Deadly Sins of AI Predictions."

We've all heard about how robots will take our jobs in a few years; I've written about this before. Because I am just 22, this is a major concern of mine, and I like to keep an eye on research and thinking in the field.

Sometimes, surfing the Internet, I see scary predictions about robots taking advantage of us. It all sounds so sci-fi and unrealistic: terrifying visions of the future for me and my kids, with, of course, Terminator music playing in the background.

The MIT article has a fresh point of view that I haven't seen before and, for me, it really makes its point.

In short, the article says that predictions of a future full of robots are based on no real information, just dreams about the Singularity.

We believe them illogically, because living with today's speed of progress has made us ready to believe anything that sounds more or less plausible.

The author describes seven reasons why people make these kinds of predictions.

I really enjoyed the way he explains complicated philosophical theories and social differences between ages and technological eras.

While most of his theses are relatively simple and recognizable (if you are familiar with the tenets of philosophy) they are well-executed and extremely readable. And the illustrations are a nice addition.

It’s a short read and well worth your time.

‘Like’ it and let me know what you think.

 

Next week we’re going to share the “recipe” of implementing AI into your product – our thoughts and experiences.

For today I would like to bring you some fun – I collected some IT-related memes just for you.

To me, memes are now a very important part of our culture, like a really fast and flexible stream of everyday reflection.

Humour has always been a very important part of thinking about everyday life and its issues, and I really enjoy the way it has developed.


 

When the future is now

 


My boss recently said that sometimes he doesn’t understand what his kids are saying.

One is fond of football, the other likes memes. When they talk among themselves they use the words and meanings of their peers; in other words, subcultural slang. They give little thought to its effects on our language.

I get it; even more so after reading about Facebook’s AI creating its own unique language.

I often wonder how globalization and the interference of non-human creators will affect the future of language. Will we still be able to understand each other in 100 years?

In the Facebook case, while experimenting with language learning, a research algorithm created its own language, one humans could not understand, so that its chatbots could communicate more efficiently.

It was functional in that it continued to carry information, but uncontrollable, because researchers had no idea what was being “said.”

The result was intriguing because it showed the algorithm’s capacity for generating its own encoding scheme, but also showed what can happen with unconstrained feedback in an automated social language product.

I think the idea that someday software could be “alive” and “conscious” is an intriguing possibility, but I wonder if humans have the skill and forethought to deal with it.

What do you think?

Indoor drone challenges SOLVED

 

Last week we talked about the inherent challenges when flying industrial UAVs in enclosed spaces. Today, I’d like to share how my company is addressing those challenges.

 

Regarding the first two challenges, i.e., the lack of GPS and the absence of radio signals: NTR’s drone is truly autonomous and unmanned. In fact, the inspector only goes inside a tank to launch the UAV or change a battery.

We accomplish this by using a combination of sonar and lidar to navigate walls and other obstacles. Precise positioning is achieved using only on-board sensors (rangefinders, optical flow, IMU, and ultrasonic), so no connection to the outside world is needed.
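To make the idea concrete, here is a minimal sketch of how a fast-but-drifting inertial estimate can be blended with a slow-but-absolute ultrasonic reading. This is a classic complementary filter, not our actual flight code; the function name and the alpha value are my own illustration.

```python
def fuse_altitude(alt_prev, vz, dt, sonar_range, alpha=0.98):
    """Blend IMU dead reckoning with an absolute sonar reading.

    alt_prev    -- previous altitude estimate (m)
    vz          -- vertical velocity from integrating the IMU (m/s)
    dt          -- time step (s)
    sonar_range -- ultrasonic rangefinder reading (m)
    alpha       -- trust in the IMU prediction; the remaining (1 - alpha)
                   slowly pulls the estimate toward the drift-free sonar
    """
    predicted = alt_prev + vz * dt  # fast update, but drifts over time
    return alpha * predicted + (1 - alpha) * sonar_range
```

Called every control tick, the fast IMU term keeps the estimate responsive while the sonar term quietly cancels accumulated drift.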

And because one size doesn't fit all, we gave our UAV software the ability to work with any UAV frame, including drones that are ATEX-compliant.

 

Challenge 3 was the lack of light for imaging. We equipped our UAV with powerful impulse LED lighting, allowing it to shoot quality images suitable for photogrammetry and for structural inspections/image recognition.

 

Next was the magnetometer problem; as in, it doesn't work. A bit of background: drones typically use magnetometers to navigate, moving or turning against the four points of a compass. North is usually treated as straight ahead, west is left, east is right, and south is backwards. However, the metal walls of a tank, for example, mean the drone sees all directions as north, so it spins, and a spinning drone is not particularly efficient.

To fix that, our engineers removed the magnetometer and gave the drone "eyes" in the form of two lidars on its head. These allow the drone to mathematically estimate its position against the wall and understand where forward and backward are; a third lidar measures height.
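As I understand it, the geometry behind this is simple. Here is a hypothetical sketch (the function and the baseline value are mine, not the production code): if two lidars mounted a fixed distance apart both range the same flat wall, the difference in their readings gives the yaw relative to that wall, with no magnetometer involved.

```python
import math

def yaw_from_wall(range_left, range_right, baseline=0.3):
    """Estimate yaw (radians) relative to a flat wall.

    range_left, range_right -- the two lidars' distances to the wall (m)
    baseline                -- spacing between the lidars (m)

    Equal readings mean the drone faces the wall squarely (yaw = 0);
    a difference tilts the estimate toward the nearer side.
    """
    return math.atan2(range_left - range_right, baseline)
```

The sign convention here is arbitrary; the point is that "forward" is defined relative to the wall instead of magnetic north.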

Currently, our drone only “remembers” its height, because the exact position isn’t needed in something like a tank. We are working on a solution that will allow the drone to remember its exact coordinates, in order to return to that exact place after battery replacement or the end of a shift.

 

The fifth challenge was providing the maneuverability required in tight spaces.

Drones may not feel, but they still make every effort to avoid hitting obstacles and walls that would damage them. Logically, the more precise the positioning, the more effectively a drone functions in tight spaces and avoids accidents.

Our engineers did two things: they improved the algorithms and lowered the weight. They also equipped the drone with a 3-axis gimbal and more powerful controllers, so it could operate longer, and integrated systems such as optical flow (optical navigation) and SLAM (simultaneous localization and mapping) into its "brains."

 

Our final challenge concerns environments full of edges and obstacles. Again, some background: optical flow works best in environments with plain surfaces, such as tanks or tubes with metal walls and nothing else. There it is as simple as using a computer mouse: left is left and right is right. I find it amazing that the same laser tech sits in something as common as a mouse and something as exotic as a drone.

However, when surfaces with edges are added, in a warehouse, a ship's tank, or a living room, optical flow is almost useless, because the drone is unable to locate the corner and turn.
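The mouse analogy fits in a few lines. This toy sketch (the names and the flat-floor scale model are simplifications of mine) shows how per-frame pixel shifts from an optical-flow sensor, scaled by altitude over the camera's focal length in pixels, are simply summed into a position:

```python
def integrate_flow(flow_frames, altitude, focal_px=500.0):
    """Dead-reckon (x, y) from a sequence of (dx, dy) pixel flows.

    A pixel shift maps to ground motion as dx * altitude / focal_px
    (a flat-floor, small-angle approximation).
    """
    scale = altitude / focal_px
    x = y = 0.0
    for dx, dy in flow_frames:
        x += dx * scale
        y += dy * scale
    return x, y
```

Note what happens over a featureless corner: the sensor reports (0, 0), the sums stop changing, and the drone silently loses track of its real motion. That is exactly the failure described above.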

If SLAM algorithms are integrated into the UAV's software, it is much more likely to locate the corner and turn. It does so by estimating and then predicting how the surrounding environment looks, constructing or updating a map of the unknown environment, and simultaneously keeping track of its own position within it. That said, SLAM is not as simple as it seems and requires an almost total reconstruction of the drone.
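The mapping half of that loop can be sketched in miniature. This is a deliberately simplified illustration (real SLAM also has to correct the pose estimate itself; the grid representation and values here are my own): each range beam marks the cells it passes through as free space and its endpoint as an obstacle.

```python
import math

def update_grid(grid, pose, angle, rng, resolution=0.1):
    """Rasterize one range beam into an occupancy grid.

    grid       -- dict mapping (ix, iy) cells to 'free' or 'occupied'
    pose       -- (x, y) of the sensor in meters
    angle      -- beam direction in radians
    rng        -- measured range in meters
    resolution -- cell size in meters
    """
    x0, y0 = pose
    for i in range(int(rng / resolution)):
        d = i * resolution
        ix = int((x0 + d * math.cos(angle)) / resolution)
        iy = int((y0 + d * math.sin(angle)) / resolution)
        grid[(ix, iy)] = 'free'       # the beam passed through: free space
    hx = int((x0 + rng * math.cos(angle)) / resolution)
    hy = int((y0 + rng * math.sin(angle)) / resolution)
    grid[(hx, hy)] = 'occupied'       # the beam stopped here: an obstacle
    return grid
```

Repeating this for every beam from every pose is what gradually turns raw ranges into a map the drone can use to find that corner.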

That’s what we are working on now and will release soon. I look forward to sharing it with you. Keep in touch!

Challenges using Industrial UAV systems for indoor navigation

 

As you know, my company develops software and hardware for UAVs, but not the kind you usually read about.

While developing our indoor drone and the software for it, we faced and had to handle a lot of challenges, like these:

  1. The drone doesn't know where it is, because there are no GPS signals in places like steel tanks, tubes, or certain kinds of rooms, and that makes standard drone navigation impossible. Even when the drone has all the sensors needed to navigate obstacles, it's still fairly useless unless it can place itself within the enclosure, whatever it may be.
  2. Frequently UAVs cannot be controlled over ordinary radio channels because of surface reflection, which makes the need for "autonomous and unmanned" even more important. However, when dealing with various surfaces one size does not fit all, because each surface requires different custom features. And that's why indoor drones stay indoors.
  3. Today's cameras create amazing images, but they all have one thing in common: they require light. The lack of sufficient light in tanks, tubes, etc., makes producing good images extremely challenging.
  4. UAVs rely on magnetometers when operating in places where GPS doesn't work. However, magnetometers don't always operate correctly; for example, electric motors generate strong magnetic fields, and large chunks of ferrous metal can also distort the field.
  5. While drones are highly maneuverable, they require space in which to maneuver. They have no problem outside, but it is much more difficult to fly in a tight, enclosed space, such as a tank or tube.
  6. Flying a UAV in the open air, or in an empty room with plain surfaces, is very different from flying in an environment full of edges and obstacles. Indoor navigation demands precise positioning to handle working goals, such as inspections, as previously discussed. Edges and obstacles demand special technologies, such as SLAM, but those require substantial additional hardware that adds weight, and because indoor drones must fly and maneuver in tight spaces, they can be neither large nor heavy.

Please join me next week to learn about the various approaches that address these challenges.

Also, if you know of other challenges, please share them in Comments and I’ll do my best to address them, too.

The drone vs everyday life

 

For many of us, drones, AKA UAVs (unmanned aerial vehicles), are things that fly like little helicopters and are used for surveillance, something out of the movies about robots and cars that capture people's imagination.

In fact, drones are tools that facilitate the work of people in everyday life, keeping them safe in difficult conditions. Like forklifts or tractors, drones are just another tool to help people.

I believe drones are friends. Take a look at how many applications there are for UAV indoor flights.

  •  Indoor technical inspections

Drones can be used in environments such as ships, oil tanks, incinerators, mines, pipes, and planes.


  • Guides

MIT’s SENSEable City Lab developed a UAV system to guide students and visitors around the MIT campus.


  • Delivery

Drone delivery often requires entering and moving about an indoor environment.


  • Landing on a car

Landing a drone on a car, especially a moving one, requires the same level of computer vision as flying indoors. Ford is considering using UAVs to guide autonomous vehicles.


  • Real Estate sales

For remote buyers: online FPV images from a drone inside the property.


  • Image recording

For indoor sports and other activities.


  • Rescue operations

Our technology allows drones to navigate inside buildings destroyed by earthquakes, etc., and deliver supplies to survivors caught in the rubble until rescue teams dig them out.

A swarm of drones capable of navigating indoors can rescue people from buildings on fire and similar emergencies.


Please add your comments and opinions to help drones become more human-friendly!

Distributed team: making hardware happen

 

As I mentioned last week, NTR has a new department dedicated to developing hardware for several clients around the world.

Yes, I said hardware. For years software was everything; even some of my friends were surprised when I mentioned we are supplying distributed teams to build hardware. They said, “What are you building? Why do startups (our main clients) need new hardware?”

I said, “Think UAV, AKA, drones.”


Our hardware department was originally formed to do a major UAV project, along with some IoT and robotics projects. The department consists of Sasha, the team lead, lead engineer/developer, and AI/computer vision specialist; Andrey, lead engineer and embedded developer; another Andrey, a hardware engineer and 3D printing and modeling specialist; Ruslan, engineer and embedded developer; and Lesha, a junior engineer who is learning machine learning.

It all started when a long-time Dutch client came up with a new startup idea and came to NTR to make an MVP of it. The idea was to use drones for the technical inspection of oil tanks.

You see, oil tanks must be inspected for technical problems every 10 years. The process is very time-consuming, costly and, most importantly, very dangerous for the human inspector.

This is how an oil tank looks from the inside, during the first drone's field testing.

It's been a very challenging project, because the steel tank's walls are reflective, which means they reflect all commonly used types of signals (GPS, Wi-Fi, Bluetooth or radio), so the UAV must do everything on its own. That means implementing flight algorithms that fully cover the surface and avoid obstacles.

And to make it even more difficult, there is a serious lack of light, meaning it's extremely challenging to shoot high-resolution, quality images.

This picture was taken by the drone.

Worse still, Ruslan was seriously injured in a car accident and spent several months in the hospital, but he's OK now and back at work.

Accomplishing all this turned out to be far more difficult than anyone expected! It doubled our original estimate, but the scientific interest was so strong that NTR decided to complete the project ourselves. The result is that, on our own, we built a drone that comes close to DARPA FLA requirements, which is rare among commercial drones, and that is something we are very proud of!

Here are a few videos of our drone in action.


Just checking the world tech news, I'm amazed at how fast progress is. A few years ago we didn't know what a drone was (except in sci-fi), and today we get packages drone-delivered by Amazon, DHL runs a drone-based medicine delivery service in Germany, Google is testing its own drone-delivery service in Australia, Walmart uses drones to inventory its warehouses, the United Arab Emirates is working on a system to use drones to transport government documents, military uses are beyond counting, and both Walmart and Amazon sell drones to consumers.

So, how does it feel to be a part of something that big and fast-growing?

Mindboggling and majestic.