Tag Archives: #AI

The dark side of AI

 

There are two sides to everything, including innovation.

Take AI. It can be turned to good, as discussed last week, but it can just as easily be used by the dark side.

The dark side of AI holds consequences for the real world, both socially and personally.

Stuff like dating, which I, as a 22-year-old single woman, take very personally.

An article in Gizmodo describes the future of online dating, and that future is brutal. Human relations are usually based on games, emotions, negotiations, and so on.

Online dating itself kills some of that: you both know you’re looking, and much of the play is lost from that moment. Depending on the app, you know you’re both ready for sex and maybe more (spending time together, even a relationship), otherwise you would never have opened the app or site. But at least these apps still leave some room for the fun of maneuvering.

The future of online dating looks more like breeding selection: take two dogs of the same breed, with desirable genes and compatible traits, and they give you nice puppies.

How? The AI algorithm will look at your social media, reveal a lot about you (who you are, what you like, your friends, your family and more; check out the article for details, it’s terrifying and depressing) and give you the perfect match.

Thank you, AI, but I prefer to remain a messy human, after all.

AI is a big player in enabling sites to addict us in order to increase their revenue. More and more, AI tells us what to buy (think Amazon suggestions) and, taking a page from game makers, helps online businesses and social media increase the addictiveness of their sites through profiling and data analysis. Miki also wrote about the dark side in When What You See Ain’t What You Get.

Because my company builds AI software for drones that take on dangerous jobs so humans don’t have to risk their lives, I’ve been talking with many people about the two sides of new technology. All of us, including the engineers, hope AI will only be used in positive ways.

But none of us are so naïve that we believe that will happen.

The Seven Deadly Sins of AI Predictions

 

I love working at NTR Lab, partly because of the people, but also because we have a huge number of AI-related projects and a really strong R&D department. It’s fun to connect what I read in the news with what is going on in my company.

I’ve written a lot about our drone team, AI, etc. But because I work with this kind of stuff I can’t avoid thinking about it in a philosophical way, as I’m sure many of you do. That’s why I want to share an article from the MIT Technology Review called “The Seven Deadly Sins of AI Predictions.”

We’ve all heard about how robots will take our jobs in a few years; I’ve written about this before. Because I am just 22, this is a major concern of mine and I like to keep an eye on research and thinking in the field.

Sometimes, surfing the Internet, I see scary predictions about how robots will take advantage of us. It sounds so sci-fi and unrealistic: terrifying forecasts of the future for me and my kids with, of course, Terminator music playing in the background.

The MIT article has a fresh point of view that I haven’t seen before and, for me, it really makes its point.

In short, the article says that predictions of a future full of robots aren’t based on information, just dreams about the Singularity.

We believe them in a non-logical way, because living with this speed of progress has made us believe everything that sounds more or less plausible.

The author describes seven reasons why people make these kinds of predictions.

I really enjoyed the way he explains complicated philosophical theories and social differences between ages and technological eras.

While most of his theses are relatively simple and recognizable (if you are familiar with the tenets of philosophy), they are well executed and extremely readable. And the illustrations are a nice addition.

It’s a short read and well worth your time.

‘Like’ it and let me know what you think.

When the future is now

 


My boss recently said that sometimes he doesn’t understand what his kids are saying.

One is fond of football, the other likes memes. When they talk among themselves they use the words and meanings of their peers. In other words, subcultural slang; they give little thought to the effects on our language.

I get it; even more so after reading about Facebook’s AI creating its own unique language.

I often wonder how globalization and the interference of non-human creators will affect the future of language. Will we still be able to understand each other in 100 years?

In the Facebook case, while researchers were experimenting with language learning, the algorithm created its own language, one humans could not understand, so its chatbots could communicate with each other more efficiently.

It was functional in that it continued to carry information, but uncontrollable, because researchers had no idea what was being “said.”

The result was intriguing because it showed the algorithm’s capacity for generating its own encoding scheme, but also showed what can happen with unconstrained feedback in an automated social language product.

I think the idea that someday software could be “alive” and “conscious” is an intriguing possibility, but I wonder if humans have the skill and forethought to deal with it.

What do you think?

Me and AI

 

credit: Business Insider
Google’s AI, Deep Dream, generated this.

Two weeks ago I shared a review of Geoff Colvin’s Humans are Underrated and promised to tell you more about why the AI discussion resonates so much with me.

Me in short: I was born and live in Tomsk, which is in Siberia. My mother is an economist and a military engineer by education. My father is an air-conditioning engineer, but he’s really proficient in modern tech just for fun.

More importantly, I’m young, just 21, so all the talk about AI taking jobs and even making humans obsolete is worrying.

It’s not a sharp pain, but more like a subtle ache that you know you should do something about, but procrastinate, because it doesn’t seem all that urgent.

I read that robots will take over jobs and render humans superfluous.

And AI isn’t just a nebulous concept for me. I do business development for NTR Lab, which is considered an expert at building AI software and its component parts for clients.

Plus, my work means I talk to many of our clients — entrepreneurs creating new uses and applications for AI.

As a philosophy major this really bothers me; as someone who will live in the worlds described it sometimes scares me.

That’s probably why I refuse to focus on the “ache.”

It’s also why reading about Colvin’s book is so heartening.

It gives me hope that my studies in philosophy will give me the empathic edge I need to stay relevant.

Only time will tell.

 

Music with RNN: myth or reality?

 

Last week I shared my excitement about my involvement in NTR’s new work on neural networks/RNNs and promised to share what I learn, along with backstories about the project itself.

Remember when I told you about my “other job” as front woman for Vkhore? Well, like most bands, we compose a lot of our own music.


I know from my own experience how hard that is — composing isn’t some off-the-shelf hobby.

But what if people with no musical training (formal or not) and no technical skills could use a computer application, choose the style of music to generate and listen to the results right then and there?

Sounds more like science fiction, but so do a lot of AI projects.

On a more personal level, training our own RNN to compose music means I’ve been getting a crash course in Machine Learning.

Over the past week I’ve been reading up on recent developments in Machine Learning in general and, more specifically, neural networks composing music.

 

There are already a number of them in existence. There is Magenta from Google, a project with a tutorial that lets people generate music with a recurrent neural network, but it’s a simple model without stellar musical results.

What I wanted to know is whether an RNN can actually learn to compose music that has well-defined parts, i.e., the structure of music: verses, choruses, bridges, codas, etc.

Based on my research, there’s already been a good deal of development to make that happen. Originally, music generation was mainly focused on creating a single melody. You might be interested in the discussions on Hacker News and Reddit from about a year ago. More recently, work on polyphonic music modeling, centered on time-series probability density estimation, has met with partial success.

NTR’s goal is to build a generative model using a deep neural network architecture that will create music with both harmony and melody.

We want our RNN to be able to create music that is as close to music composed by humans as possible.
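
To make that goal more concrete, here is a minimal sketch of the general approach, written by me as an illustration rather than a description of NTR’s actual model: an LSTM that learns to predict the next note in a sequence and then samples from itself to “compose.” The vocabulary, sizes and toy data are all assumptions.

```python
# Minimal next-note LSTM sketch (illustrative; not NTR's model).
import torch
import torch.nn as nn

VOCAB = 128        # e.g. MIDI pitch numbers 0-127
SEQ_LEN = 64

class MelodyRNN(nn.Module):
    def __init__(self, vocab=VOCAB, embed=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = MelodyRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training data: random "note" sequences standing in for a real corpus.
data = torch.randint(0, VOCAB, (32, SEQ_LEN + 1))
inputs, targets = data[:, :-1], data[:, 1:]

for epoch in range(5):
    opt.zero_grad()
    logits, _ = model(inputs)               # predict the next note at every step
    loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
    loss.backward()
    opt.step()

# Generation: feed the model its own samples, one note at a time.
note = torch.tensor([[60]])                 # start on middle C
state, melody = None, [60]
for _ in range(32):
    logits, state = model(note, state)
    note = torch.multinomial(torch.softmax(logits[:, -1], dim=-1), 1)
    melody.append(note.item())
print(melody)
```

Getting from something this simple to music with real structure (verses, choruses, bridges) is, as I understand it, the hard part.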

I asked my colleague and friend Natasha Kazachenko, who is responsible for training our neural network to generate music, several questions to better understand exactly what we are doing. (It’s much easier to learn about a highly technical subject when you work in tech with good friends who are patient enough to explain stuff to a non-techie.)  I will share her answers next week.

I learned long ago that it is normal human psychology to attribute human traits, emotions, even gender, to non-human entities, and techies (yes, they are human) are no different.

NTR Lab’s neural network is female.

Her name is Isadei.

Inceptionism gallery

What do you know about neural networks?

As some of you know, I started my career as a professional journalist, so when my boss asked me if I thought I could distinguish the original text of a contemporary Russian poet from text generated by artificial intelligence, I said yes. I took a test and scored 8 out of 10 correctly.

Score one for us humans, but I have to say it got me curious about AI and how it works. One thing I learned is that there are many examples of machine-generated creative writing on the internet, such as CuratedA.

Curiosity + journalist = research, so I started reading. But remember, I am not a techie, although I work for a very cool tech company, so just because I wrote this doesn’t mean I really understand it all. But I’m learning!

First, I learned that artificial neural networks are mathematical or computational models inspired by the structure and functioning of the biological neural networks that are found in the nervous system of living organisms. Dr. Robert Hecht-Nielsen, inventor of one of the first neurocomputers, defines a neural network as “…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”
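
To make that definition a bit more concrete for myself (this is my own toy illustration, not something from Dr. Hecht-Nielsen), here is what a tiny network of such “simple, highly interconnected processing elements” looks like in code: each neuron is just a weighted sum of its inputs pushed through a nonlinearity.

```python
# Toy feedforward network: a few simple "neurons" connected by weights.
# Purely illustrative numpy; the weights are random and untrained.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # 3 inputs feed 4 hidden neurons
W2 = rng.normal(size=(2, 4))   # 4 hidden neurons feed 2 outputs

def network(x):
    hidden = sigmoid(W1 @ x)   # each neuron: weighted sum + nonlinearity
    return sigmoid(W2 @ hidden)

print(network(np.array([0.5, -1.0, 2.0])))   # the network's "response" to an input
```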

Next, I learned about recurrent neural networks (RNNs). They are a class of artificial neural network architectures, inspired by the cyclical connectivity of neurons in the brain, that use iterative function loops to store information.

RNNs have several properties that make them an attractive choice for sequence labeling. Because they can learn what to store and what to ignore, they are well suited to using context information. They also have several drawbacks, though, which have limited their application to real-world sequence labeling problems.

In plain language, this means that while they accept many different types and representations of data, and can recognize sequential patterns in the presence of sequential distortions, they are a long way from being perfect.
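
Here is the smallest sketch I could manage of that “iterative function loop” (again my own illustration, with random, untrained weights): the same function runs at every time step, the hidden state h carries context forward from earlier steps, and a label is emitted for each element of the sequence.

```python
# Bare-bones recurrent network for sequence labeling (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
Wxh = rng.normal(size=(8, 4)) * 0.1   # input -> hidden
Whh = rng.normal(size=(8, 8)) * 0.1   # hidden -> hidden (the recurrence)
Why = rng.normal(size=(3, 8)) * 0.1   # hidden -> per-step label scores

def rnn_label(sequence):
    h = np.zeros(8)                         # the network's "memory"
    labels = []
    for x in sequence:                      # one step per sequence element
        h = np.tanh(Wxh @ x + Whh @ h)      # new state depends on the old state
        labels.append(int((Why @ h).argmax()))  # emit a label at every step
    return labels

sequence = [rng.normal(size=4) for _ in range(6)]
print(rnn_label(sequence))
```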

That said, there are already many real-world situations where RNNs are being used, such as time series analysis, natural language processing, speech recognition and medical event detection.

Okay, so we have an idea about what RNNs are, why they are exciting and how they work. But the most exciting thing for me was finding out that there are already RNNs doing art, and that people have figured out how to train RNN character-level language models.

There is the algorithm developed at the University of Tubingen in Germany, which can extract the style from one image (say, a painting by Van Gogh) and apply it to the content of a different image, and there is Google’s Inceptionism technique, which transforms images.
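
For the technically curious, the core trick of the Tubingen algorithm (neural style transfer) is surprisingly compact. The sketch below is my own simplified reading of it, assuming PyTorch and torchvision are available and using random tensors as stand-ins for real images: style is captured by the correlations (Gram matrices) of a pretrained network’s features, content by the features themselves, and a new image is optimized to match both.

```python
# Simplified neural style transfer sketch (assumes torchvision >= 0.13).
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def gram_matrix(feat):
    # Style lives in feature correlations, not in exact pixel positions.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

vgg = vgg19(weights="DEFAULT").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

content_img = torch.rand(1, 3, 256, 256)   # stand-ins; real use loads actual images
style_img = torch.rand(1, 3, 256, 256)
generated = content_img.clone().requires_grad_(True)

def features(x, layers=(3, 8, 17, 26)):
    feats, out = [], x
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i in layers:
            feats.append(out)
        if i == max(layers):
            break
    return feats

with torch.no_grad():
    content_feats = features(content_img)
    style_grams = [gram_matrix(f) for f in features(style_img)]

opt = torch.optim.Adam([generated], lr=0.02)
for step in range(100):
    opt.zero_grad()
    gen_feats = features(generated)
    content_loss = F.mse_loss(gen_feats[-1], content_feats[-1])
    style_loss = sum(F.mse_loss(gram_matrix(g), s)
                     for g, s in zip(gen_feats, style_grams))
    (content_loss + 1e3 * style_loss).backward()
    opt.step()
```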

dreams of Google’s neural network

(Google, “Inceptionism” experiment)

Neural networks have operated in the visual arts as a creative mechanism and in the study of human aesthetic preferences, but, most exciting to me as a musician, artificial neural networks have been used to actually generate music.

I learned that neural networks are fundamental to artificial life and that the entities created can become interactive art pieces or even create the art itself.

These lifelike forms rely on generative models, which are a rapidly advancing area of research. As these models improve and their training methods and datasets scale up, we can expect samples that depict entirely plausible images or videos.

Deus ex machina! I thought, when I read that the net had begun to sound human, like Her instead of Siri. Of course, if your background is technical, you are probably more interested in the way this stuff actually works.

As I said at the start, I’m not a techie and reading through this stuff sometimes makes my head hurt, so why am I excited? Because I work for a company that has started training our own recurrent neural network and I’m actually involved in the project!

I will tell you more next week.