This article is from Hacker News, so I’ve lost most of the formatting and links to the commenters.
Complete hyperbolic garbage. Data, given a big enough set, is just a reflection of reality. You might as well claim we have a religion of our eyes and ears.
Given huge sets, much like reality, almost anything can be “proven” at some sort of local scale. It’s not a religion or a cult or a new god. It’s just observation.
Second, data is a recording of past events, and it has surprisingly limited ability to predict future outcomes. There is a narrow window of the most instinctual tasks where it works well, and there is a ton of value there, but it comes nowhere close to meta-cognition's ability to give you free will.
‘Given huge sets, much like reality, almost anything can be “proven” at some sort of local scale.’
I don’t think you’re right; I think that some things aren’t tractable to this kind of analysis. Gödel-type things, obviously, but if you believe in freedom of will and an open universe, then a lot of other things too. For example, the path of true love, the next line of a poem, the summer after next’s hot fashion trend.
More (or less) importantly: international events, earthquakes, solar flares.
Fractals, chaos and incompleteness.
What I was trying to say is that you can prove it retrospectively, but you can’t predict it in the moment.
Given enough data, while you can’t predict exact actions, you can predict general trends with good accuracy. For example, given your complete post history and metadata about those posts and your views of HN, someone could fairly easily predict what subjects you will upvote, what subjects you’ll comment on (and a good idea of the tone of your comment), and so forth.
We’re creatures of habit, and once you have enough information to identify those habits, you can do a pretty good job at predicting what we will do. Heck, with the little data I have access to, I can be pretty sure which articles I will see comments from big posters like jacquesm and tptacek, and what their comments will contain. I can’t predict every story they will comment upon, nor the exact details of the comments, but at a higher level it is definitely predictable.
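The habit-based prediction described above can be sketched as a simple frequency model. This is a hypothetical illustration, not anyone's actual system: the topics, history, and `predict_engagement` function are all made up for the example.

```python
from collections import Counter

def predict_engagement(history, candidate_topics, k=3):
    """Rank candidate story topics by how often this user has
    engaged with them before -- a crude "creature of habit" model."""
    habits = Counter(history)
    # Score each candidate by past engagement frequency;
    # unseen topics score 0 and sort last. Python's sort is stable,
    # so ties keep their original order.
    ranked = sorted(candidate_topics, key=lambda t: habits[t], reverse=True)
    return ranked[:k]

# Hypothetical post history: topics of stories a user commented on.
history = ["security", "security", "compilers", "security", "startups"]
print(predict_engagement(history, ["security", "compilers", "ai", "startups"]))
# → ['security', 'compilers', 'startups']
```

Real systems use far richer features (timing, tone, co-commenters), but even this toy version captures the point: once habits are counted, the general trend is predictable even though any single action isn't.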
Valid points, but they won’t stop the average, less self-analytical person from trusting Big Data, just as many are susceptible to altering their behavior because of some “study” they read about in the paper. Due to our (the masses’) inability to follow the gnothi seauton aphorism, we are more likely to allow someone or something to make choices for us. See the concluding paragraph of the essay.
Is it different today from 50 or 100 years ago? Doesn’t seem so different to me.
The article keeps confounding free will with authority over society, and with morality itself. But these are all different (see footnote * ).
Harari complains that “Dataism” (and email) makes us “… tiny chips inside a giant system that nobody really understands”. But individuals always were just small parts of a great ecosystem that nobody understands.
But even with SMTP, we are still big enough to live our lives well for ourselves and loved ones. We can also try to improve society — but the results will be small and unpredictable.
He shows the same conflation of individual freedom with social control when he says:
in a humanist society, ethical and political debates are conducted in the name of conflicting human feelings,
Well, that’s a pity, because in a liberal society they should be debates about human rights. Your feelings about gay pride or religion should not give you authority to control others, but your rights might set some bounds.
As far as I can tell, Harari has learned that modern biology is starting to see life and the mind as an information system. He accepts this science, but doesn’t like it. So he tries to build some confused link to the Big Data giants.
Now there are good reasons, well known on HN, to disapprove of those guys. But Harari’s reasoning is not even wrong.
*: e.g. The RC church emphasises individual free will, claims its own temporal authority, and teaches that God is the ultimate moral authority.
I’m not certain where the haters are coming from. I thought it was well written and extremely interesting.
Sure - there are current limitations, and being able to predict what hasn’t happened yet is certainly difficult.
What was interesting to me, though, is that many of our most important choices in life have happened thousands of times before. Should I buy this? Should I marry this person? What school should I go to, or what career should I pursue? These are all questions that can, on average, be better answered with available data than by just following your gut. This is the position of the article, and I tend to agree.
While the mystical nature of the totality of the machine we are cogs in seems hyperbolic or unnerving, at a practical level it makes sense to model your interactions with the world that way. Taking that stance, the comparison to religion and humanism should be easy to follow and reflect on. When are you “dataist”? When are you “humanist”? You’ll learn a lot about yourself simply by asking that question.
Which makes for a great ending to the article: do you know yourself better than an algorithm does? Maybe not in all cases, and the data shows that’s not necessarily a bad thing.
Please tell me exactly which algorithm, hyperparameters, processing chain and data sources are able to make better decisions than humans in ethical matters - reliably, consistently and with no regard to who is running the algorithm. What kind of questions do you want to answer and what kind of structure do you expect the answer to have?
You can’t answer, because the candidate algorithms are trade secrets, “under active development”, or can only be run when babysat by trained specialists? Then it’s just a giant “Computer says no”, in which the implicit assumptions and biases of a relatively small group of humans are sold as “objective” or even “superhuman” by putting some layers of indirection between them and the public.
The big metaphor for life, mind, the universe, god, keeps changing. It was animals/spirits, then it was clockwork/machines, then it was information/data, and this article is a reflection of that. We keep thinking we are on the cusp of ultimate understanding, until our metaphor maxes out and we realize we’re not.
I believe the next metaphor will be ecology. The notion that an information processing agent can be understood in isolation from the ecology in which it operates (both in terms of energy/mechanics and information) is getting harder and harder to sustain. And the ecologies we humans rely on are dying quickly. We’ll need to turn that around sooner or later, by insight or by force.
The notion that an information processing agent can be understood in isolation from the ecology in which it operates (both in terms of energy/mechanics and information) is getting harder and harder to sustain
That is my main objection to the Chinese Room mental experiment. A room is not embodied, so it can’t learn like us. But an AI agent could be embodied and develop intelligent behavior.
The article is hyperbolic in the extreme and doesn’t really reflect the reality of these systems.
However. It is an important read, in my opinion.
Because this is a well-rendered and well-formatted summation of the rhetoric people use to argue against scaled analytics and the collection of data. Understanding the counter-arguments and the motivation behind this article is a good step toward interfacing with people uncomfortable with these ideas.
I think it’s important because it flags that there is a school of thought (which the author doesn’t side with or against) that denies humanism and theism and instead holds that observations and calculations are a better way of understanding our place in the universe than empathy with the human spirit or the purpose of a divine spirit.
This is an inversion of science’s place in ontology and epistemology (I hope I’ve got the spelling that indicates theory of knowledge, not vaginal surgery). Previously, science did not speak about our inner lives and destiny; now people believe that it can say everything.
This is a shift that has happened twice before so it’s quite something.
For the past few centuries humanism has seen the human heart as the supreme source of authority not merely in politics but in every other field of activity. From infancy we are bombarded with a barrage of humanist slogans counselling us: “Listen to yourself, be true to yourself, trust yourself, follow your heart, do what feels good.”
I’m just reading a book by Norbert Elias (https://en.wikipedia.org/wiki/Norbert_Elias), who, when writing about his theory of the “civilizing process”, stated exactly the contrary. More precisely, he says that we only listened to our hearts/true selves/passions back when we were “un-civilized”, as in the Middle Ages; but after the State monopolized the use of force and the collection of taxes, and after the “société de cour” formed, people had to suppress their passions and “rationalize” their external actions.
A typical article from someone really smart and educated who wants to talk about something he doesn’t really understand, without knowing that he doesn’t.
We’ve had free-will debates here on HN before, and the submitted story is usually quickly debunked as pompous, arrogant pseudo-intellectualism: a flavour of jaded post-college nihilism. Fatalistic points of view can jump off a cliff if they find their circumstances so constrained and without option.