
Northwestern Buffett Institute for Global Affairs

Humanistic Thinking in the Age of Big Data with Christian Madsbjerg

Breaking down silos and helping people work together across boundaries of discipline, profession and culture is difficult work, and it is work Christian Madsbjerg has spent much of his career doing with considerable success. Madsbjerg is the co-founder of the consulting company ReD Associates, Professor of Applied Humanities at The New School, and has just launched a new venture called Lateral Data.

On this episode, Madsbjerg talks with Annelise Riles about diagnosing silo problems and removing them in business, health care and other industries. He also discusses the silos that exist at research universities, which he calls the “mother of all silos.” Madsbjerg, author of Sensemaking: The Power of the Humanities in the Age of the Algorithm, also talks about artificial intelligence, algorithms and the need for an infusion of humanistic approaches into algorithms or as an alternative to algorithms.

“I think the way we organize the humanities and social sciences has to change, and it's not that it has to be organized the way it's organized today. It was different, you know, 300 years ago. It doesn't have to be this way. It's just made of people that weren't much smarter than you and I. And it became cemented over time, and I think we can break it by showing other ways as more helpful.”

-- Christian Madsbjerg


Background reading:

  • In this “Talks at Google” video, Madsbjerg discusses a new type of analytics based on the human sciences that can help companies reveal the true motivations and needs of their customers. 
  • Watch the Institute for New Economic Thinking’s video with Madsbjerg about the future of learning.

Subscribe to Breaking Boundaries wherever you listen to podcasts, so you never miss an episode:

Apple: https://apple.co/3FIXuqS
Spotify: https://spoti.fi/2Xr04k4
Google Podcasts: https://bit.ly/3G03749
Amazon: https://amzn.to/3lTQYWA
Stitcher: https://bit.ly/3G1RM3F


Read the transcript of this show below

Annelise Riles [00:00:03] Welcome to the Breaking Boundaries podcast. I'm Annelise Riles, executive director of Northwestern University's Roberta Buffett Institute for Global Affairs. The Northwestern Buffett Institute is dedicated to breaking through traditional silos of expertise, geography, culture and language to surface novel solutions to pressing global challenges. Today's guest, Christian Madsbjerg, is a true boundary breaker par excellence, who has single-handedly demonstrated what can be achieved when the best insights and methods from across the disciplines, from the humanities to the pure sciences, are brought together to address intractable problems. He's the co-founder of one of the world's most innovative and successful consulting companies, ReD Associates, he's a professor of applied humanities at The New School in New York, and he's the author of Sensemaking: The Power of the Humanities in the Age of the Algorithm. Christian has also just launched a new venture called Lateral Data. Welcome Christian, so glad to have you here.

Christian Madsbjerg [00:01:13] What a pleasure.

Annelise Riles [00:01:14] I want to begin with ReD Associates, the consulting firm you founded many years ago to unlock the power of humanistic thinking in the age of big data. So tell us a little bit about what your vision was, and what were some of the successes and challenges you faced along the way?

Christian Madsbjerg [00:01:30] Most innovation doesn't come out of vision. It comes out of disappointment. So it comes out of being annoyed or unhappy about something. And I think that's true for commercial innovation, social innovation, any kind of innovation; it often comes from a sort of state of disappointment, at least for me. And maybe that's because I'm from Scandinavia, where that's our sort of standard mode of operation. But it started with thinking that I would be a professor, and always being told that I would be one, and then seeing it as a student and just seeing how unhappy people seemed to be. And that didn't feel like a place that I would thrive in. And there was nothing wrong with being a professor, and there was nothing wrong with the places I went to, really. It was more that it just didn't feel right. And then I had to figure out a way to make a living that wasn't there. And, you know, I could have gone into media and, sort of, writing, which I did for a little while. But in the end, I just thought, OK, I'll do my own thing here, and maybe it's because I'm weird, or maybe it's not, but I'm going to try. And I was very young, in my early 20s, and sort of arrogant enough to think that I could contribute. The beginning was really being fascinated with decision making, like how are really large organizations making decisions? And I had this assumption, which turned out, I think, to hardly be true, that most of the decision making is made on technological or technical terms, what can be made, and also financial, so what would be lucrative or meaningful to make from a financial perspective. And I just thought, maybe there's a human side to this as well. Maybe the social is also a relevant factor when you make medicine or if you run a big organization or if you sell sports shoes; that health and healing as a human thing, or running as an activity, as a phenomenon, your relationship to your body, seemed to be treated as an irrelevant thing. So that was the start: could that be helpful? I studied philosophy mostly, and I thought the theories that I learned about in philosophy seemed so obviously helpful. Yet when I then arrived in a boardroom, or, you know, with a group of people in a company primarily, they just hadn't heard of it. So all it was, really, was introducing the world of philosophy. Or, I don't know, not quite academic philosophy, because academic philosophy is boring today, but sort of a mix of philosophy and anthropology seemed to be helpful. So it wasn't a vision. It was more disappointment, and a need for making a living.

Annelise Riles [00:04:17] So Christian, you believe more deeply than anyone I know in the power of the humanities, and you've just articulated that power so eloquently, and at the same time you know the limitations of universities so well. And I remember you once told me that the mother of all silos is the research university. So what is it about this particular organizational form that seems so challenging for the kind of work that you're trying to do? What is it we need to do about it, and how can we change the university? What do you think?

Christian Madsbjerg [00:04:47] One thing is diagnosing the problem, and another is how to change it. And the first one is easier than the second one. So I don't know, I just feel universities are so weird. The way we organize knowledge is just so baffling. When you go into a university and spend time in the hallways, you see that there are these sort of identities where a group of people gather around a banner of, let's say, sociology, and then write in their own language to themselves, for themselves, in this sort of weird career path of an ever more specialized, ever more boring kind of approach. And I think in the natural sciences that makes a lot of sense, that you specialize in a particular area of the human biome or something like that and get really specific and really precise about this area of study you have. And I think that's very productive. But in the social sciences and in the humanities, this idea of separating thinking about humans, our history and our songs and our stories and the way we organize ourselves politically, into specific silos, each with its own sort of song and dance, seems frankly ridiculous, because I think economists are studying the same thing as sociologists and the same thing as anthropologists. And they somehow find themselves in a situation where they have to get into the top five reviews, and they have to get a certain number of papers into them in order to succeed and have a meaningful career. And they don't talk to each other, and it's just not very productive if you want to be helpful. I think especially economics is just ridiculous, how they have their own language and speak to themselves in a way that nobody else would be able to. So it is the way we organize knowledge. The way we teach is fundamentally siloed, for a phenomenon that isn't siloed. So it's very, very strange and I think unhelpful.

Annelise Riles [00:06:53] Now, as you rightly point out, there's the more difficult question: what do we do about it?

Christian Madsbjerg [00:06:58] Well, it's got to go. The way we organize knowledge today, in these ever more specialized career paths, in conferences that are ever more specialized, with great hatred toward other types of language or other ways of approaching the world, and animosity between groups that really have the same interest and the same sort of scholarship in many ways, that's got to go. And it's not just in the humanities. If you want to change anything in the world, you can't just stay in sociology or something like that. You have to work with designers or engineers or, you know, people that can make things, or finance, right, to understand how things are organized financially. Without an ability to move around in different areas and have them work together, you won't change anything, at least not in the short term. So I think there is this need for an ability that cuts across, and many people are trying to do something about it. It's often called interdisciplinary or transdisciplinary and so on, and I'm always very scared when something is called interdisciplinary, because it often means that there is no discipline. But I think the way we organize the humanities and social sciences has to change, and it's not that it has to be organized the way it's organized today. It was different, you know, 300 years ago. It doesn't have to be this way. It's just made of people that weren't much smarter than you and I. And it became cemented over time, and I think we can break it by showing other ways as more helpful.

Annelise Riles [00:08:35] If you were to think five years ahead to where we would hope to be, what kinds of new institutional structures would you hope will emerge on the landscape between the university and the corporate sector in these spaces? How would you structure it if you could just wave a magic wand and create anything you want? What do we need to do here?

Christian Madsbjerg [00:08:52] So I'm part of a project that's going on, called Transformation of the Human, in California, and at least some of the ideas there are, I think, part of the picture. It is studying technology with a big T, so both in the life sciences and in, you know, Silicon Valley as technology, but from a philosophical angle, like studying technology from a philosophical angle, and then having artists involved to help experiment and express what these ideas mean and what these new technologies might mean. So in the labs, not outside the labs, in the language of creating rather than criticizing, or you can criticize if that's necessary, but the basic modality, the basic way of working, is trying to chip in rather than stand outside and be critical. That project is taking on students now for the first time and is embedded in seven or eight important companies and labs, and I think that's a sort of minimum viable product, like an early version of something that I believe in and could be exciting. Structurally, it's hard. I mean, you're dealing with one of the enormous, most important pillars of society, which is the research university and the way that it has sort of developed into the monster it is today. And that's hard. But I think suggesting new things is not that hard. I think you can convince people, companies, organizations, science labs to involve you if you just argue for it. I've found that it's not 100 percent of the time, but most of the time you can get people involved in the idea, because it is just so obviously important, and you can even get them to let you in and be part of the project. And by doing that, you can suggest ways in which it could work. That's at least what we tried to do with ReD Associates, to suggest a way. And thankfully, a lot of people have taken those ideas and copied them, and lots of companies came out of that way of working. It's very flattering that people were inspired by the ideas. So it's easy enough, actually, to engage with these areas if you just think it relevant yourself in the first place and find what you're studying to be of importance and helpful and interesting to other people. So I haven't found that much friction or pushback. I think engineers are just excited that somebody wants to think about where robots are going, or what it means that we have open ledgers for how money moves around or how things move around the world. It's so obviously interesting, and engineers and scientists and business people are easily excited about it. So I think building something that works on a small scale and then making it really big after that is the model.

Annelise Riles [00:12:09] So let's talk a moment about algorithms. You and I are speaking today on a day in which a whistleblower is testifying before Congress about issues at Facebook. And you've written a lot about AI and the need for an infusion of humanistic approaches into algorithms or as an alternative to algorithms. What do you think about this public debate that's unfolding? What would be your thinking about what needs to happen at this moment?

Christian Madsbjerg [00:12:34] The word AI is annoying, right? Because it's neither artificial nor intelligent. It is a statistical model for a small slice of life, and it has nothing to do with generalized intelligence, nothing to do with how we experience the world and how meaningful the worlds are that we live in the midst of. And there's this idea that statistical models or machine learning models are the same as humans and how we experience the world, which is bonkers. But nonetheless, that's the sales pitch, and it has been for a long time. It's been called different things. Sometimes it's called AI. Sometimes it's called big data. Sometimes it's called deep learning. It's the same thing from a human perspective. And I think that's important to know, and it annoys me every time I hear that a machine is smarter than I am. Of course it is. I mean, a calculator is smarter than I am at calculating, but not at cooking. That's the first thing. The second thing is that, of course, Facebook is of importance. You know, whether you call it A.I. or not, what they do is organizing almost three billion people. It is a way that friends connect, stay in touch, meet each other. It's of great importance. But when Facebook was created, there was no one there, there was an empty seat, right, to help inform: what's a friend? What is friendship like? What's the social and sort of human aspect of this? I think they wish they had that. And I think I know that they wished they had that in the beginning, because lots of things could have been different if, in the creative process of creating something as enormous and as impressive as Facebook, there were people that think about humans in a sophisticated way and not just as A.I.

Annelise Riles [00:14:30] There are these assumed narratives about how something like friendship works that are underlying the supposedly smart technology, and those are actually very, very simple and wrong.

Christian Madsbjerg [00:14:42] And these are important things, like friendship, money, healing, courtship and love. Those are things we've studied for a long time and know a lot about, and so on. So what makes life meaningful, a meaningful thing to be part of? And there was none of that when they built these things. And I think it's not that it should have been stopped, it's not that someone should have raised their hand and said, let's not do this. It's that there should have been a creative process with people with the kind of background that I know you have, helping and being part of the process, rather than sitting outside and saying, you know, that won't be a good idea.

Annelise Riles [00:15:23] So tell me about your new venture, Lateral Data. What is this about? What are you trying to achieve?

Christian Madsbjerg [00:15:29] Well, again, it comes from annoyance and disappointment. I've studied the American health care system for 15 years, and at ReD Associates we did hundreds of studies on understanding patients and their interactions with nurses and payers and hospitals and so on. And it's been a pain for me for 15 years that there are silo problems. Again, just like the university has these silo problems, the health care system has the same type, and you see it. You see people having diabetes, but also depression, and also heart problems, and also a cancer treated 10 years ago, and having specialists in each of these silos that don't listen to each other. So the medical treatment, the data setup, the physical data setup, the servers that all the information sits on, it's sitting in silos and it just hurts people. They can't find their way around. They get a cocktail of medicine that's not coordinated between doctors and, you know, just misery. And I've been looking at that as an observer for a long time with some intensity. And even if you are very advanced, trying to organize, let's say, your mother's treatment, if she's got co-morbidities, it's very difficult. So the idea is, and I've been frustrated and horrified by that for a long time, and then I've met technology people, the people I'm doing this with, who have told me that you can now, because of the cost of cloud storage and computing processing, lift all the data up and virtualize it into one system. So you can take a hospital system, and you can take all the data that sits in different systems and different servers and different languages, and you can virtualize it into one system. That has a lot of privacy issues related to it, and it has a lot of technical issues, but it's possible now. And if you can do that, then instead of what Google would do, which is extract all the data into one big Google system, you keep it in the hospitals and you keep it with the patient. You can then have each doctor, each physician, each person that interacts with a patient see what the others are doing. You can have a patient centralize the information about treatment, or treatments, and show your physician that you are also doing these other things that you don't understand yourself, but others would. So in that way, by a technical move, you can get rid of the data silos in the American healthcare system, and that's the project. So it is technically building it. And the American healthcare system is so sick, I mean, twice as expensive with worse outcomes than the European systems, right? It's 17 percent of GDP in the U.S., rather than eight percent in France, with worse outcomes, and not for everyone. By doing this, we can minimize cost, we can get an overview, we can get better treatment. We can do so many things just by changing the way data is stored and organized. So it's a technical thing, but it comes from a very human observation of looking at patients and their confusion and misery, and the sort of enormous, ridiculous cost that goes into it, which is insulting to someone like me. I just get insulted by those kinds of things. So that's the move. It's the same as the university, right? It's diagnosing silo problems and seeing if you can get rid of them. You can't do that in the university, but in this case, you can actually do it technically, which is exciting.
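
The "virtualization" Madsbjerg describes amounts to a federated view over records that stay where they already live, rather than copying everything into one central store. As a rough illustration only, here is a minimal Python sketch of that idea; the class and field names are hypothetical and are not drawn from Lateral Data's actual system.

```python
# Minimal sketch of a federated, "virtualized" patient view.
# Each siloed system is queried in place; nothing is copied into a central store.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class Record:
    patient_id: str
    source: str   # e.g. "cardiology_emr", "oncology_archive"
    kind: str     # e.g. "diagnosis", "prescription"
    detail: str


class SourceSystem(Protocol):
    """Any siloed system (EMR, lab server, pharmacy database) queryable in place."""
    name: str
    def fetch(self, patient_id: str) -> list[Record]: ...


@dataclass
class InMemorySource:
    """Stand-in for a real hospital system; its data stays inside the source."""
    name: str
    records: list[Record] = field(default_factory=list)

    def fetch(self, patient_id: str) -> list[Record]:
        return [r for r in self.records if r.patient_id == patient_id]


class VirtualPatientView:
    """Federated view: each query fans out to the sources and the merged
    result is assembled on demand, so the silos keep holding their own data."""

    def __init__(self, sources: list[SourceSystem]):
        self.sources = sources

    def history(self, patient_id: str) -> list[Record]:
        merged: list[Record] = []
        for source in self.sources:
            merged.extend(source.fetch(patient_id))
        # One combined view across silos, so e.g. a cardiologist can see what
        # the oncologist prescribed before adding another drug to the cocktail.
        return sorted(merged, key=lambda r: (r.kind, r.source))


if __name__ == "__main__":
    cardiology = InMemorySource("cardiology_emr", [
        Record("p1", "cardiology_emr", "prescription", "beta blocker"),
    ])
    oncology = InMemorySource("oncology_archive", [
        Record("p1", "oncology_archive", "diagnosis", "cancer treated, 2013"),
    ])
    view = VirtualPatientView([cardiology, oncology])
    for record in view.history("p1"):
        print(record.source, record.kind, record.detail)
```

The design choice this sketch gestures at is the one Madsbjerg emphasizes: keep the data with the hospitals and the patient, and give clinicians a shared, on-demand view, instead of extracting everything into a single centralized database.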

Annelise Riles [00:19:33] Let me finish by asking you the question I always ask everyone, which is: as you think about the future now, what are you most worried about and what are you most hopeful for?

Christian Madsbjerg [00:19:45] I'm worried about our relationship to nature. I'm worried about complete disintegration of societies because of financial differences between groups. But I'm mostly worried about technology, really. I'm worried about the kind of things that we will unleash in the next 10 years. So not the things we know now, but the things that are at an early stage in the pipelines of big pharmaceutical companies and hedge funds and technology firms. Not understanding that before we release it, which is what we do right now, worries me a lot. But what I am hopeful about is that there's a way to do it, and there are hundreds of thousands of people that have been trained in universities to do just that, if only they could find a way of engaging. So we have trained a lot of great people in the social sciences and the humanities who are out there. That is the scale that's already there. How to get the engagement with technology is sort of the problem that we need to deal with, I think. But I'm very hopeful about the people that are out there, that could be helpful if only they could find a way to engage.

Annelise Riles [00:20:52] You are an inspiration to me and I know to many others, and I am really excited to see what you achieve next. Thank you. Thank you. For more information on this episode and on the Northwestern Buffett Institute for Global Affairs, visit us at buffett.northwestern.edu.