
Monday, April 24, 2017

Review – Only Humans Need Apply


Only Humans Need Apply: Winners and Losers in the Age of Smart Machines
Thomas H. Davenport, Julia Kirby

In his most recent book, Tom Davenport, along with co-author Julia Kirby, provides an excellent entry point and framework for understanding the evolving relationship between smart people and smart machines. There’s a great deal of hand-wringing over technology encroaching on jobs of all sorts, anxiety that has accompanied every new technological innovation stretching back long before the days of Ned Ludd. Davenport and Kirby avoid the hand-wringing and take a close look at how today’s technologies—artificial intelligence, machine learning, etc.—are changing the way jobs are designed and structured.

They articulate their goal as

“to persuade you, our knowledge worker reader, that you remain in charge of your destiny. You should be feeling a sense of agency and making decisions for yourself as to how you will deal with advancing automation.”

In large part, they succeed. They do so by digging into a series of case histories of how specific jobs are re-partitioned, task by task, between human and machine. It’s this dive into the task-level detail that allows them to tell a more interesting and more nuanced story than the simplistic “robots are coming for our jobs” version that populates too many articles and blog posts.

Central to this analysis is the distinction between automation and augmentation, which they explain as

“Augmentation means starting with what minds and machines do individually today and figuring out how that work could be deepened rather than diminished by a collaboration between the two. The intent is never to have less work for those expensive, high-maintenance humans. It is always to allow them to do more valuable work.”

They give appropriate acknowledgement to Doug Engelbart’s work, although the nerd in me would have preferred a deeper dive. They know their audience, however, and offer a more approachable and actionable framework. They frame their analysis and recommendations in terms of the alternative approaches that we as knowledge workers can adopt to negotiate effective partnerships between ourselves and the machines around us. The catalog of approaches consists of:

  • Stepping Up—for a big picture perspective and role
  • Stepping Aside—to non-decision-oriented, people-centric work
  • Stepping In—to partnership with machines to monitor and improve the decision making
  • Stepping Narrowly—into specialty work where automation isn’t economic
  • Stepping Forward—to join the systems design and building work itself

Perhaps a little cute for my tastes, but it does nicely articulate the range of possibilities.

There’s a lot of rich material, rich analysis, and rich insight in this book. Well worth the time and worth revisiting.




from McGee's Musings http://ift.tt/2pWR7te
via IFTTT

The depreciating value of human knowledge

Automation is just one facet of the broader spectrum of AI and machine intelligence. Yes, it's going to affect us all (it already is, with the increasing emergence of intelligent agents and bots), but I think there is a far deeper issue here that - at least for the majority of people who haven't become immersed in the "AI" meme - is going largely unnoticed: the very nature of human knowledge and how we understand the world. Machines are now doing things that - quite simply - we don't understand, and probably never will.

I think most of us are familiar with the DIKW model (an over-simplification if ever there was one), but if you subscribe to this relationship between data, information, knowledge and wisdom, I think the top layers - knowledge and wisdom - are getting compressed by our growing dependence on the bottom two - data and information. What will the DIKW model look like in 20 years' time? I'm thinking barely perceptible "K" and "W" layers!

If you think this is a rather outrageous prediction, I recommend reading this article from David Weinberger, who looks at how machines are rapidly outstripping our puny human abilities to understand them. And it seems we're quite content with this situation: being fairly lazy by nature, we're more than happy to let them make complex decisions for us. We just need to feed them the data - and there's plenty of that about!

This quote from the piece probably best sums it up:

"As long as our computer models instantiated our own ideas, we could preserve the illusion that the world works the way our knowledge —and our models — do. Once computers started to make their own models, and those models surpassed our mental capacity, we lost that comforting assumption. Our machines have made obvious our epistemological limitations, and by providing a corrective, have revealed a truth about the universe. 

The world didn’t happen to be designed, by God or by coincidence, to be knowable by human brains. The nature of the world is closer to the way our network of computers and sensors represents it than how the human mind perceives it. Now that machines are acting independently, we are losing the illusion that the world just happens to be simple enough for us wee creatures to comprehend.

We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it."

Should we be worried? I think so - do you?
Steve Dale




from 'KIN Bloggin' http://ift.tt/2pSSOrv
via IFTTT

Monday, April 10, 2017

The museum to markets – The Museum of Failure

As regular readers will note, around here we tend to like markets, on the grounds that they generally - except where they don't - work. But it's important to understand what it is that markets generally work at, and that's not success, not at all. Markets work well because they work well at failure.

Which is why we're rather tickled by this new Museum of Failure.

LEARNING IS THE ONLY WAY TO TURN FAILURE INTO SUCCESS

That's their tagline, and we'd quibble a bit with it even though we agree with the general idea. Rather, as we'd put it, you can only succeed if you work out what's failing. Some of the ideas, like that Coke Blak, could have, might have, succeeded. They didn't. With others, it's a bit more mysterious why they didn't succeed:

Bic For Her pens are also on display. The supposedly female-friendly pink and purple pens launched to widespread derision and mockery in 2012. "I mean, you know that women can't use regular pens. You need special pens for their delicate hands," West said. "And they're double the price of regular pens because they're specially for women." 

Quite why that didn't work is unknown; pink razors do cost more than blue ones, as we're so often told. The important part of it, though, is this:

West told The Local: 'You can fail at any point during the process. It's better to have a lot of cheap mistakes early in the process, than to do so on a large scale. Then it costs billions.'


That's why market systems work better than planned ones. What it is possible to do, and what people want to have done, is an ever-moving feast. The technology with which we can do things is always changing, and so are personal tastes. We thus want some method of sorting through what can be done and what people want to have done. And the finest way yet discovered of doing this is for every lunatic to try. We, the rest of us, will then sort through what is available and decide which of these possible things add utility to our lives.

Imagine wandering into GOSPLAN one day to explain that we need an overlay to the telecoms network so that people can swap cat pictures with each other. Mr. Zuckerberg would have been laughed out of the room, and yet, 2 billion people later, that experiment he cooked up in a dorm room seems to add utility to some number of lives.

And thus the glory of that Museum of Failure: it's a museum to why markets work. Precisely and exactly because so many innovations get absolutely nowhere - that's how we find out which ones we want.



from Adam Smith Institute http://ift.tt/2oXTcos
via IFTTT