Intelligent Robots for Manufacturing Automation (CxOTalk #361)

Today, we are speaking about robotics, advanced robotics, and the state of the art in autonomous robotic systems. Our guests are Fredrik Bruhn, who is the CEO of Unibap, and Dr. David Bray, who is the executive director of the People-Centered Internet. He’s my guest cohost and subject matter expert
this week. Fredrik, how are you? Welcome to CXOTalk. Hello. Good day. It’s good to be here. We’re calling in from Sweden. We’re grateful to join you. I see you have a friend back there behind
you. I sure do. This is the virtual impression of Batu [Akan],
which is a robot that is actually doing autonomous assembly and different sequences for smart
factories. I can’t wait to meet Batu [Akan] in more detail. David Bray, how are you? Welcome back to CXOTalk. Doing great, Michael. Thanks for having me on CXOTalk. I’m really excited to hear more about what
Fredrik is doing with Unibap. Incidentally, we’re both Eisenhower fellows
as well. That’s how we actually originally met when
Fredrik was part of his exchange coming to the United States and I was going overseas. This should be a really great conversation
about robots, computer vision, and automation. I’m excited. Let’s begin. Fredrik, I think we should start by asking
you to tell us about Unibap. Unibap is a company working in two different areas. We’re doing smart industry automation and we’re providing space cloud computing infrastructure for spacecraft. You may ask what space and smart industry have to do with each other. What’s interesting is that they are very closely
related and connected because, if you have a smart factory where you provide in-line
AI that has to work like a human 24/7, you cannot have any standstills, which means that
you need to do a lot of remote maintenance. You need to have very reliable systems and
uptimes. There is the connection over to space because,
in space, you obviously cannot reach your product and do physical maintenance on it. You have to do it remotely. The uptime requirements are very similar as well. David Bray, tell us about your work. At the People-Centered Internet, we do demonstration
projects that measurably improve people’s lives and livelihoods. Vint Cerf is our Chair. Mei Lin Fung is our cofounder. Projects include, for example, helping groups
that have not gotten Internet access find a way that’s affordable and accessible for
them. Also, thinking about what we can do to counter
misinformation or disinformation and thinking through the future of work that includes AI
and automation and things like Fredrik is doing with Unibap. Fredrik, what are the fundamental problems
that you’re trying to solve? What we are actually creating at Unibap is
virtual operators where we train robots to have the same knowledge and understanding
as a human. When we put our systems into smart factories,
they are basically a high school engineer taught to know what assembly, grinding, drilling,
or painting and coating quality assurance is like. Then, when they work inside the factory, they
learn. After about six months to a year, you have
a master’s degree. About a year and a half later, you have a
Ph.D. level. What we fundamentally changed is the way of
production so that you really have virtual operators that can allow you to have higher
production rates and a lot more control of your processes. How is that different from what we have in
factories today? Just in general, in factories, isn’t there
a push to lower the skill requirement for people working in the operation? That is absolutely true. The big difference is that AI that we deploy
inside of factories allows you to have a truly virtual operator that behaves like a human. What you see in factories today is a lot of static automation where you have certain software that repeatedly does the same thing, but it cannot compensate for errors or changes the way a human would. Typically, you would also have a jig that
allows you to do the same thing for a large batch. In our case, when the system understands the
reality like a human, it can actually go down to one-part-manufacturing because it can change
every single step from second to second. I guess, Fredrik, is this now possible because
of computational cycles? Is this now possible because of advances in
computer vision? Why now and what do you see as sort of like
the secret to why this is possible to have machines that learn next to a factory worker? There are many different reasons why this
is appearing now. One of the most critical aspects is the computational
performance. It hasn’t been there before. But as many viewers know, AI isn’t new. It’s been around for about 50, 60 years. Now, when the computational performance is getting really good and we’re also able to harness the power of deep neural networks,
which really changes the way you can do AI in practical applications, that’s really an
enabler. Also, if you look at the research environment,
there has been significant progress over the last five years on algorithms that allow us to train systems together with humans much faster and, in certain cases, cut the human out of the loop completely by training on only known good parts.
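To make that last point concrete, here is a minimal sketch, in Python, of what training on only known good parts can look like: model the distribution of good parts and flag anything that falls outside it. The feature values, threshold, and method are hypothetical illustrations, not Unibap’s actual approach.

```python
# Sketch: quality inspection trained only on known good parts.
# Feature vectors stand in for measurements from a vision system;
# the data, threshold, and method are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
good_parts = rng.normal(loc=10.0, scale=0.5, size=(500, 4))  # training set: good parts only

mean = good_parts.mean(axis=0)
std = good_parts.std(axis=0)

def is_anomalous(part_features, threshold=4.0):
    """Flag a part whose features deviate too far from the known-good distribution."""
    z = np.abs((part_features - mean) / std)
    return bool(z.max() > threshold)

print(is_anomalous(np.array([10.1, 9.8, 10.0, 10.2])))  # False: looks like a good part
print(is_anomalous(np.array([10.1, 9.8, 13.5, 10.2])))  # True: one feature is far off
```

No defective examples are needed at training time, which is what makes this attractive on a production line where defects are rare.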
Would you say that the problem you’re solving falls mostly on the robotics and the hardware side, or is the balance on the AI
and the software and the data side? That’s a really interesting question because
we are at the turning point where humans and machines start to operate a lot closer together. I would say it’s about 40% hardware and about
60% software. The software part of our business and the
realization of this is only increasing every three, four months. We’re seeing that we’re turning more and more into a data-driven software company than a hardware company, but there are the reliability issues that I talked about. If you’re in a factory, you really have to
work all the time. You cannot have any downtimes. It’s very important that you work with the
hardware and software together because they have to be very closely matched. You also have to change the software a little
bit to include things for safety and reliability that typically wouldn’t be inside a standard
package from AWS, for instance. Fredrik, what are you finding, as people work
next to the robot and help train it? Are you finding that they appreciate it? Are you finding that they are surprised? What responses are you finding from workers
working next to a robot? We have customers that represent both ends
of the spectrum, I would say. We have customers that have already, internally, in the company, created a digital transformation agenda where this has been discussed for many years, and they have started preparing their employees for this change. We have other customers that haven’t started
this journey yet but are exploring this. We’re seeing everything from really helpful
people saying that I’m really happy that this dull, dangerous, and dirty work can be automated
so that I can participate and do something safer and better for me, to the other end of the spectrum
where people are saying, “Okay, I’m going to sabotage this because I’d like to keep
my job the way it is.” It’s a complex set of problems because you’ve
got hardware, software, data, and then human interactions, which include the human attitude. You’re kind of in this intersection of a lot
of complexity. Absolutely, and that’s our reality. If you look at the skillset of our company,
it’s truly amazing because we have an interdisciplinary workforce here. We have attracted the best talent we can find
in this country and possibly even worldwide. Are you finding, as well, that there is appreciation
for the fact that maybe the robot can do things that a human simply can’t do? Can it see things or can it see imperfections
that a human eye can’t see? Can it lift things that would just be too
heavy for a human to do? Is there any appreciation for that? Absolutely, and that goes back to the dangerous
part of the work where you have jobs that are really taxing on you. A system like this will do the same thing
over and over and over and they will obviously see things the same way, even if a human would
be tired and start to miss certain things. A system like this doesn’t do that. It stays the same all the time. If you look at the different jobs that are
being automated first, those are the three Ds, the dull, dangerous, and dirty work, so
those are typically related to mining. It’s heavy machinery. It’s painting where you have a lot of bad
environments for your health. Those are the first ones that we automate
and most people that we meet are really happy that this is getting done because then you
can move on to a better job. What are the significant challenges or the
hardest challenges that you face as you’re dealing with these various systems? Those are really different depending on the
area. One of the really big problems that is common between the areas that we operate in is trust: gaining the trust of the customers that these systems will behave as well as or better than humans over time. That’s really a big threshold for the entire
market to break through. If you take all the other parts and you look
at hardware or software, all of that can be discussed. You can show with real numbers that it’s really
robust and reliable. But when it comes to trust, that’s more driven
by the appearance, the feelings of people. To have good trust in a system means that
everyone has to see it live for a time period to really believe in it. In some respects, you’re saying it’s not just
that the machine is learning from the humans but then the human organization is watching
and learning whether or not they are willing to trust that the robot can be reliable. Absolutely. That’s exactly what I’m saying. A good analog is our autopilots in commercial
aviation. Not a lot of people know that an Airbus A380
from Frankfurt to San Francisco is only human-operated for about 5 minutes out of a 13-hour flight. The rest is autonomous, but we still like
to see that pilot up front, even though the pilot doesn’t do much today. Right. It goes back to trust. Fredrik, [discuss] where all of this fits
into manufacturing and compare and contrast what’s going on today with what you’re making
possible. What we’re making possible is the ability
to increase production rates because when you have a human or a virtual operator that
operates like a human, we can actually scale that speed of operation because we can invoke
very large computers and run these systems very fast, much faster than a human can reason. The other aspect is that you digitize everything. In our case, when we do manufacturing where
we can do a real-time inspection of 100% of the parts, you can really optimize your yield. You can optimize your entire production flow
and we can create data that can be aggregated on a higher fabric level so you can compare
your different manufacturing lines, your different manufacturing sites. You can really push manufacturing between one site in the U.S., for instance, and one site in Sweden with the click of a button. You click a button and then you move your
manufacturing around globally and you have the same quality; you have the same virtual
operator regardless of where they are. You get a much better standardization than
you do with humans.
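As a small illustration of how 100% in-line inspection results could be rolled up to that higher fabric level to compare lines and sites, here is a hedged sketch; the record format is invented for the example.

```python
# Sketch: aggregating per-part inspection results across lines and sites.
# The record format is hypothetical, purely to illustrate the idea.
from collections import defaultdict

inspections = [
    {"site": "US", "line": 1, "passed": True},
    {"site": "US", "line": 1, "passed": False},
    {"site": "SE", "line": 2, "passed": True},
    {"site": "SE", "line": 2, "passed": True},
]

totals = defaultdict(lambda: [0, 0])  # (site, line) -> [passed, total]
for r in inspections:
    key = (r["site"], r["line"])
    totals[key][0] += r["passed"]
    totals[key][1] += 1

for (site, line), (passed, total) in sorted(totals.items()):
    print(f"{site} line {line}: yield {passed / total:.0%}")
```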
Fredrik, I know when we were talking in the past, something that really underscored the importance of what you were doing was safety. You were talking about certain parts that
have to be manufactured to be able to withstand certain stresses or certain strains. If they don’t meet that quality, it can actually
result in something catastrophic or explosive. There’s an element of increased assured safety,
would you say, with the results of what you’re doing with robots? Absolutely. The different operations that typically are
done by robots, like welding, assembly, grinding, drilling, and things like that, mostly come
with a lot of quality inspection that is, today, done by humans. When you can start to combine these different
elements, you get a much higher quality overall because these systems will do the same thing
every time. That really means that if you have high-value production, where you create parts that are safety-critical or of high value, this is tremendously important because you have the same quality all the time. If you have any manufacturing errors, they
can be quickly detected and removed from the production line. We have some really good, big-volume customers for whom we have shown we can lower quality losses by 13%, 14%. Maybe if we could talk a little bit more about
Batu [Akan] or the robot behind you, tell us a little bit more how it can sort of deal
with things that it’s never seen before or deal with parts that it’s never assembled
a specific way before but, using computer vision, can actually let you know if it’s
actually possible to build something or not. I will go into a little bit of explanation
here and then I hope we will be able to run the robot live for you guys so you can see
what I’m talking about. What we are doing is that the robot behind
me, which is a Universal Robot, is actually just like the arms and the legs of a human. At the tip of this one, you have one of our
Intelligent Vision Systems. That’s really the core of the intelligence. We sit on top of the robotic control system. What we do is that, given a number of CAD
models and a task description, we ask this robot to please do an assembly sequence. We taught the system the meaning of picking and
placing, for instance. Now, with a few CAD models and a task list
to assemble a stack, we can ask the robot digitally to do that for us. The robot will look around and see, do I have
the parts that should match whatever comes from the CAD? Then it looks at the task list and says, “Okay,
you want me to do pick and place. Ah, that’s great because I understand the
meaning of pick and place.” That means that this system, in real-time,
can start to reason about the task and the object. It will actually generate the code needed
to do this in real-time. The big application in this in industry is
that you can shorten the time from design to manufacturing. That’s really why you can go down all the
way to one-part-manufacturing because you can change the drawing all the time. If you have a different assembly sequence
for different customers and they order one of something, that really doesn’t matter to
the system because it will figure out the code to do that when it’s needed. That’s truly a big change because, previously,
industrial robotics has been about programming, large volumes, jigs, always trying to keep
everything the same. If you need to do reprogramming, you do that
in a safe environment. In this case, I will stand next to the robot
as it reasons on the code needed to do what we’re going to ask it to do. That’s really a tremendous shift. When you say reasons on the code and develop
the code, can you tell us what specifically you’re talking about? What I’m talking about is an industrial robot
taking commands for left, right, up, down, grab, release, and things like that. The robot behind me doesn’t know that code sequence right now so, when we give it a task to assemble something, it will reason
about that code and generate the code commands that need to go to the industrial robot controller
to move it around the way we want it to move.
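A toy sketch of that idea: given part poses detected by the vision system and a high-level task list, expand each step into the primitive commands a robot controller understands. The command names, skill name, and pose format here are hypothetical illustrations; the real system plans from CAD models.

```python
# Sketch: generating low-level robot commands from a task list and detected parts.
# All names and formats are hypothetical illustrations.

detected_parts = {            # from the vision system: part id -> (x, y, z) position
    "block_a": (120, 40, 0),
    "block_b": (200, 80, 0),
}
task = [("pick_and_place", "block_a", (300, 300, 0)),
        ("pick_and_place", "block_b", (300, 300, 40))]  # stack block_b on block_a

def plan(task, parts):
    """Expand each high-level step into primitive commands for the controller."""
    commands = []
    for op, part, target in task:
        if op != "pick_and_place":
            raise ValueError(f"unknown skill: {op}")
        if part not in parts:
            raise ValueError(f"part {part} not found on the table")
        commands += [("move_to", parts[part]), ("grab",),
                     ("move_to", target), ("release",)]
    return commands

for cmd in plan(task, detected_parts):
    print(cmd)
```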
By analogy, because I have a two-and-a-half-year-old right now, it’s sort of like when you ask a two-year-old or a three-year-old to pick something up. You’re not giving specific instructions, but
it’s looking where the objects are. It’s looking at the orientation. A child is reasoning on how to pick it up. What you’re saying is, similarly here, unlike
conventional factories where you have to explicitly tell them, “Go up, down,” to pick it up, it’s
reasoning on its own how to go about doing it. Then if you told it to put it in another place,
how to then put it in that place, which does raise an interesting future, which is, we
could see in the future, for example, for buildings where you need to finish the walls,
the ceilings, the floors with different fixtures, different tiles, different flooring, different
windows. If you had CAD images of each of the different
floors of that building and you had a pallet with the different supplies available and
a series of these robots that could recharge themselves when they needed to, you could,
in theory, tell the robot, “Go forth and actually finish each of the floors of this building
for me.” As long as it had the CAD images and it had
the parts, it would work 24/7 to finish that. Absolutely. That’s a great example. That’s something that is actually being prototyped
here in Sweden with construction companies now. That’s not too far away in the future, actually,
to be able to do that. Do you want to show us your robot friend? Absolutely. If the viewers can live with a little bit
of background noise, we will run a demo for you here live now. We’re going to ask this robot now to start
and to assemble a stack of blocks. It’s difficult to see in the live video but
you can actually see some differences in the blocks. The top block will say Intel because we have a cooperation with Intel, and only one of these four blocks has that. With a little bit of magic, now the demo should
start. As it moves around in the background, we can
continue the discussion. This system now has started to reason about
the task it’s been given. It knows the shape from the CAD of the blocks
that you see here on the table. It has an understanding of picking and placing. Given the task that it should take three of
these blocks and put them together in a stack, it’s now reasoning about, “Okay, what’s the
size of the first block? What’s the size of the second block? What’s the size of the third block? How do you want me to stack them? Oh, you want them to be stacked this way. Okay. If you want it to be this way, I have to move
around and perform the operations that way.” What’s interesting here is that you see no
jig, because this is picked and assembled on a standard desk. As long as you can mount a robot arm on it,
we can use whatever desk or surface area you have as the factory. What you see now is that the robot has assembled
an arch. Maybe you’re able to read that it says Intel
on top of it. In the CAD model, which the viewers can’t
see, it’s actually being asked to put that block at the top. Now, it will begin a sequence to disassemble
this as well. What’s interesting here is that it’s just
a desk and a robot and a Vision System connected to it. Then you go directly from your CAD models. I think what’s astounding to me is, yeah,
there’s nothing. Like you said, there’s no jig. There’s no grid pattern or anything like that. Again, you literally just put the robot on
the desk, told it what you wanted it to build, and it did. There’s nothing else giving it any sort of
spatial orientation or anything like that. That’s quite impressive. In contrast to traditional robotics where
you have fixed programmed pathways. Correct. Now it’s taking this stack apart again, but
this is just showing the skill module, or the virtual operator, called pick and place. What you can do now is that you can take your
entire toolbox with quality assurance, with drilling, with, in this case, pick and place,
and assembly. You can take that entire toolbox and put that
into all sorts of various combinations. What you’re seeing here, this is done with
a Universal Robot. What we have done is that we have demonstrated, together with Universal Robots and also ABB, KUKA, FANUC, and Yaskawa from Japan, that we can really
do this with almost any industrial robot that you can find on the market today.
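One way to picture how the same skill modules can drive hardware from Universal Robots, ABB, KUKA, FANUC, or Yaskawa is a driver abstraction: the skill emits generic commands and a per-vendor adapter translates them into that controller’s protocol. A hypothetical sketch, with invented class and method names:

```python
# Sketch: a vendor-agnostic skill behind a driver interface.
# Names are invented; real controllers speak vendor protocols
# (for example, URScript on Universal Robots, RAPID on ABB).
from abc import ABC, abstractmethod

class RobotDriver(ABC):
    @abstractmethod
    def move_to(self, pose): ...
    @abstractmethod
    def set_gripper(self, closed: bool): ...

class URDriver(RobotDriver):
    """Adapter that would translate generic commands into one vendor's protocol."""
    def move_to(self, pose):
        print(f"UR command: move to {pose}")
    def set_gripper(self, closed):
        print(f"UR command: gripper closed={closed}")

def pick_and_place(driver: RobotDriver, src, dst):
    """The skill module only talks to the generic interface."""
    driver.move_to(src)
    driver.set_gripper(True)
    driver.move_to(dst)
    driver.set_gripper(False)

pick_and_place(URDriver(), (0.1, 0.2, 0.0), (0.3, 0.3, 0.1))
```

Supporting a new robot brand then means writing one more adapter, not rewriting the skills.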
The ultimate flexibility, so it’s not just an issue of cost savings but, really, you’re talking about manufacturing agility. Correct. That’s really the beauty because now you’re
agnostic to the volume you have. You’re agnostic to a lot of the design requirements
that you have today because this system behind me will reason like a trained high-school
engineer. You just need to put together the various virtual operator modules that you have an interest in. To take the analogy even further, what you’re
saying is we now have the ability, if you’re shipping something, say it’s circuit boards,
each circuit board would be custom-tailored to whatever requirements someone had or even
something like teddy bears. Each customer could customize what the teddy
bear is carrying, what the nose is or something like that, as the robot does it. That doesn’t impact your scale because the
robot is able to handle a task on demand for customization. Absolutely. You hit the nail on the head. That’s exactly what this is about. What kind of skill is required to program
the robot and how does that compare to the setup of traditional robots? What’s interesting here is that anyone with
a little bit of CAD design capabilities can program this robot. Actually, anyone can program this robot because
it’s not being programmed at all. It generates its own code when it’s needed. Given the operating modules that we have taught
the system, you can take a CAD sequence directly to an operation like this. As long as someone has made those CAD drawings,
in this case, and created that sequence of assembly, for instance, then the system will
figure it out itself. The actual code that runs the robot is being
reasoned. Because we started off the conversation talking
about you also do things in space, I imagine this is clearly a future application for space
where you won’t be able to tell the robot everything you want to do, partly because
the mission parameters are so large, but also because there’s communication lag. I imagine the future is especially in space
and space exploration. We send robots like that out possibly to mine
asteroids, possibly to do research or something like that. You give the high-level objectives as a human
but, beyond that, the robot is left on its own to reason about how to collect the ore
sample or to collect that sample from the soil or whatever. Absolutely. One of the big applications in space is robotics. You’re going to the moon to do mineral excavation,
for instance. You were talking about building Mars bases,
and those are going to be highly autonomous missions where you have all sorts of robotic
vehicles cooperating. For us, it’s really the same. The space infrastructure that we have created,
the radiation-hardened boards that run x86 code are actually running the same code that
you see behind me. We can reuse the entire software infrastructure. Now, you can really start to take autonomy
to another level in space based on existing industrial code rather than developing new code
all the time for exotic architectures that are currently in space. The robotic hardware, I’m assuming, the arms
and the movements are the same as other off-the-shelf robots, so to speak, or are there special complexities and special features that you’ve built in? That’s an interesting question. What you see here is one of the standard industrial
robot arms that you can buy. What we do is that we interpret our code and convert that into the protocol that this robot speaks. But if you’re going to have the level of reliability
that we are requiring in our applications, you have to do a bit of tailoring on the hardware
also. That’s, again, a combination of space and
industry where we take design rules from autonomous systems in space and apply them to industrial
applications. In this particular case, this Vision System
is built with space technology to provide a level of reliability that you don’t have
in a commercial Vision System on Earth. One of the things that we were talking about
a little bit earlier, Fredrik, was how this does require you to have computing at the
edge; you’re dealing with several teraflops of computing power actually on the robot itself
to do the imaging, to do the awareness of how to put things together, and the reasoning. You can’t do this remotely, say, in a distant
cloud or something like that. You have to have computing at the edge as
well. Yes, that’s correct. Actually, in this particular demo, the computing
task is divided between the robot and an edge node, which sits under the table here. But if we relied on a cloud connection, we
would have multiple problems. We would have latency, which would be a big
problem because this is a safety-critical robot. If this robot goes haywire, it could kill
me. That’s one important aspect. The other aspect is the uptime that we talked
about before. If you’re in a critical line at the production
site, that one is never allowed to stop, so you can imagine what happens if the 5G network
or 4G network or your fiber connection to your cloud provider goes down. That means that your entire factory is standing
still. What we have to do is partition our services so that we have a high degree of autonomy localized in the factories, and we can provide additional
services on top of the cloud, but those cannot be part of the critical loop.
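That partitioning can be pictured as a control loop that never blocks on the network: the safety-critical path is computed locally, and the cloud upload is best-effort. A rough sketch with invented function names:

```python
# Sketch: the safety-critical loop runs at the edge; cloud upload is best-effort
# and never sits inside the loop. All names are hypothetical illustrations.
import queue

telemetry = queue.Queue(maxsize=1000)

def upload(item):
    raise OSError("link down")             # stand-in for a flaky cloud connection

def control_step(observation):
    """Sense -> reason -> act, computed locally with no network calls."""
    try:
        telemetry.put_nowait(observation)  # hand off data without blocking
    except queue.Full:
        pass                               # drop telemetry rather than stall the line

def drain_telemetry():
    """Non-critical side: tolerate outages; the factory keeps running regardless."""
    while not telemetry.empty():
        item = telemetry.get()
        try:
            upload(item)
        except OSError:
            pass                           # cloud down: skip it, never stop the loop

for t in range(3):
    control_step({"tick": t})              # the critical loop never waits on the cloud
drain_telemetry()
print("production continued even though the cloud link was down")
```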
What is your business model? There isn’t one business model that fits all, but the business model that we tend to drive for is data-driven. You would come in and ask us to provide a
service. We would partner with an integrator that provides
the robot and the infrastructure. Then you would subscribe to our skill modules
to provide the service. If you want to do assembly, you want to do
quality assurance, you want to do drilling, you want to do a little bit of pick and place,
you would subscribe to us for those modules. Then those modules would run locally in your
infrastructure. You would pay a tick or a royalty per produced
unit or a monthly license fee, quite similar to Spotify, YouTube Premium, or anything like
that. We have a couple of questions from Twitter
and one hits on a point very much as you just described. Arsalan Khan says, “In the future of work
with robotics, should we create residual income streams for the people who create these systems? For example, actors get paid for reruns of
their shows.” It’s interesting. In Sweden, we have a very big forest industry. It’s primarily in the north and there are not
that many people living in the north. What happens is that we are harvesting the
forest in the north and we are paying the taxes to the greater Stockholm region where
we spend most of the money, so there is already a discussion being had in Sweden about whether we should allow more of the export revenue to stay where you have the raw material. In this case, in Sweden, that means that more
of the money from forestry would actually stay up north in those regions. It’s very similar with robotics like this: if you
have a data-driven module where you subscribe to various modules when you need them, you
could actually go in and tax that. I think what you’re really getting to there,
Fredrik, and I concur, it was a good example of thinking about robotic software as a service,
robotic data as a service, and you could also imagine robotic hardware, the people that
will maintain it. It may very well be much like how we’ve seen
several companies where they don’t actually own the assets themselves but, instead, they’re
brokering the connections. There may be companies that broker the connections
between the people that have produced the hardware, produced the software that’s now
being used and will be updated as it’s needed, and the people that are helping to maintain
the systems. That same sort of thing, it may very well
be the future of the factory is, you don’t actually own the hardware that’s there but,
instead, you are brokering the context in which the confluence of hardware, software,
data, and then the people on your factory floor come together. Absolutely. One other question, related to what we were saying about edge computing. It seems like this also points to a future
in which, in space, right now we send a lot of data down, especially of images or things
that we pick up from space. When you have that computing power available
in space, it sounds like you could actually teach the machine or teach the satellite what
things you actually care about and what things you don’t care about. It doesn’t need to consume that bandwidth to send back things that you’re not interested in; it can send only the things that you’re really interested in, based on what you’ve taught the machine at the satellite’s
edge as opposed to on the ground stations. Is that something that you’re also working
on with your computing? That is one of the major applications of our space infrastructure, especially if you combine it with reinforcement learning
so that the systems can actually learn en route, and in situ, what to look for without you having to tell them from the ground how to do that. You can just imagine if you’re at Mars or
even at Jupiter and you have a 20-kilobit line back home to Sweden and you have sensors generating
16 gigabits per second. How do you get those 16 gigabits per second
through a 20-kilobit line? Well, obviously, you don’t, so you have to do a lot of in-orbit processing.
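The arithmetic makes the point: 16 gigabits per second against a 20-kilobit-per-second downlink is a ratio of about 800,000 to 1, so nearly everything has to be processed and discarded onboard. A back-of-the-envelope sketch, where the frame-scoring step is a hypothetical illustration:

```python
# Back-of-the-envelope: why in-orbit processing is unavoidable at these rates.
sensor_rate_bps = 16e9   # 16 gigabits per second from the sensors
downlink_bps = 20e3      # a 20-kilobit-per-second link back to Earth

ratio = sensor_rate_bps / downlink_bps
print(f"Only about 1 bit in {ratio:,.0f} can be downlinked")  # ~1 in 800,000

# Hypothetical onboard selection: score each frame, downlink only the best.
frames = [{"id": i, "interest": s} for i, s in enumerate([0.1, 0.9, 0.3])]
keep = [f for f in frames if f["interest"] > 0.8]
print(f"Downlinking {len(keep)} of {len(frames)} frames")
```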
This has been done for many years, but it’s been mostly about compression and taking out pixels from images. In our case, a system around Mars or even
on Mars can reason about how to assemble a hotel, for instance, or a storage room. It can reason about where to find water and
things like that in real-time. We have some more questions from Twitter. Ginny Hamilton wants–and she’s not joking–a
robot that can fold laundry. The business model can be something like Peloton
where, essentially, it’s rented. Why can’t you do that? Why haven’t you done that? Actually, there are people doing that. I don’t know if you’ve seen the pancake robot,
but you can actually buy it. I think they’re even available on Amazon. You can go out and buy a robot today that
will flip pancakes for you. These guys are starting to appear and I’m
quite sure that there are laundry robots available, maybe for big hotels so far and not for the
consumer market, but they are starting to get there. The reason why we are not doing that is that
we are targeting the manufacturing industry because we cannot be everywhere. We are a fairly small company so far. We are laser-focused on smart industry and
space, but this is definitely a market that will explode going forward with all sorts
of applications. What Ginny’s question made me think of, Fredrik
and Michael, is it could also be where you don’t actually have to buy the robot to do
it. If this robot is on wheels, you could have
an app and say, “I want–” and it could be dynamic pricing, “I want to schedule a robot
to come to my house to fold the following laundry,” and it will come up as it’s available. Maybe it’ll even let itself into your house
if you let it. It will fold your laundry and then, again,
back on wheels, it’ll go to the next house where it actually has the next task to do. It could be a future where you don’t actually
have to buy it for your own house but, given they’re on wheels, they’re going about tasks
to do the laundry folding on demand via an app. Absolutely. This is truly the remarkable thing. Could you have believed, just 15 years ago, that we would be having this conversation? Now that we’re having this conversation, it’s actually going to happen within five, seven, eight years. Right. Agreed, and I would even say, could you believe
we would be having this conversation even five years ago? I think we will be surprised, like you said,
by how much the world changes five to eight years from now. We have another question from Twitter. Sal Rasa is asking, “Given that you work across
multiple industries, is there some notion of cooperation in helping us understand the
nature of the services that you provide?” It’s a very good question. Yes, there is a tremendous amount of cooperation
between various companies. This is so complex that almost no entity in
the world can do this by themselves. We have to cooperate in the community. We have to cooperate between various business
areas, between different markets even. There is a lot of collaboration in this. You’re creating, essentially, the hardware
platform, which is the robot itself; you have the software platform, which is the AI, the
algorithms; and you are applying this to the specific industries and focus points. Correct me if I’m wrong. The hardware platform would be fairly agnostic,
the same with the algorithms, and so, therefore, it’s the industry expertise embodied in the
data that brings your company the uniqueness to work in these narrow domains that you have
chosen to focus on. Is that accurate? Yes, that is 100% correct. That’s why we really are a data-driven company,
even though you see a robot behind me, because we are instantiating those virtual operators
that are trained together with professionals that have been in industry for many, many
years. When we work with big Swedish companies with
a worldwide presence, we train the operators. Our virtual operators are trained here in
Sweden. That also just suggests, Michael, that really
what’s going to start to happen is, whoever is the first mover into a specific industry vertical and begins to curate data on how these machines work will have
an advantage if they get there first, relative to others that come later. It sounds to me that the competitive differentiation
here is not the robotic hardware nor is it the software platform or the algorithms. It’s really the data that’s embodying the
expertise. It’s actually a combination. A lot of it is in the data, but it’s also
the way you set up your infrastructure and your software architecture. Our architecture allows us to have a very
high reliability in line at the edge, and we can connect to the cloud, but we have zero
cloud dependencies, which means we can guarantee the robustness of our systems. If you look at many of the services that are
out there today, they rely on external, cloud-based services, so they cannot do this. Have you ever had a robot do something that
was surprising to you, something that you were not expecting that you found yourself
surprised with? We’re in an interesting position now with one of our big customers where their operators are starting to trust the system, and it’s become like a game between the human and the machine. They are trying to fool the systems: we are doing quality assurance, and they are adding pieces of tape to the parts. They are drilling holes in the parts. They are trying to mess with the system to
see if we can detect that. [Laughter]
That’s really interesting because now we’re at the crossover point where the trust is starting to be there and it’s like, “Yeah, I’m going to beat this machine.” Well, instead of humans sabotaging the factory
line, they’re trying to see if they can fool or beat the robot. Yeah. Yes. We have a couple of other questions from Twitter. “This is robotics intelligence as a service. Is that his business model? If so, it’s fabulous.” Is that your business model? Yes. In its ultimate form, that’s what it is. We have another question from Twitter. Arsalan Khan asks a very practical and very
important question. He says, “What are the barriers to adoption
for these dynamic robots into traditional manufacturing?” I would go back and say that it is all about
trust. The procurement officer at a big company who
wants to buy this has to have confidence that they’re buying something that they can rely
on. Can you imagine? We had 150 years of Industrial Revolution
that’s been based on the fact that you’re buying a machine that will do exactly what
you specified it to do. It won’t do anything else and it will wear
down 20% every year until you discard it. Now, you’re buying a system that comes in
as a high-school engineer, does the same as the human, and it continues to learn and improve
throughout its lifetime. Rather than degrading 20%, it’s gaining 20%
of capabilities every year. It’s a tremendous shift in the industry. How do you procure this? How is it even possible to buy a really advanced
piece of manufacturing equipment as an Office 365 license? It’s really changing the way you have to think
about manufacturing. That actually raises an interesting question. Would you charge more for a system that has
been trained more and is, therefore, more savvy to a specific sector versus one that
has no data set and, therefore, is new and novice at its task? Now, that’s a very good question. That’s part of the complexity to these new
business models because now you can actually start to think like that. Instead of lowering the price, you can increase
the price because you’re giving more value to the customer over time. In that case, C-3PO and Star Wars should have
been the most expensive droid ever but, yeah, interesting. [Laughter]
[Laughter] Well, that’s an interesting aspect. [Laughter]
As we finish up, any thoughts on robots replacing humans? My take on that is, we’re going to see a lot
of robots displacing humans. We will also see a new labor market coming
into play as well. The challenge is that, in order to participate
in the new labor market, you need to be reskilled or you need to be trained from the beginning
in a lot of abstract thinking. That’s not for everyone. That’s really part of the challenge. There won’t be any lack of jobs, but there
will be difficulties getting to those new jobs. Agreed. To amplify what Fredrik is saying, I also
think that while there will be obviously some displacement, especially with the roles that
are sort of dangerous or dirty and things like that, there is also going to be the augmentation,
whether it’s in the medical industry. You could imagine that the surgeon will also have a robotic arm assisting them. That robotic arm may be able to do the things
that a surgeon couldn’t do, with more precision, under stress. Mm-hmm. We need to think about both what do we do
about those people that are displaced completely and, as you said, can they be retrained, reskilled,
or are there other things that we need to think about to help with that? Then what do we also do with the people that
will be augmented? They’ll be working with the robot and what
the shift will be in terms of how they relate and how they work together as they move forward. All right. On those provocative questions, our time is
up. I would like to thank Dr. Fredrik Bruhn for
being our guest here on CXOTalk. Fredrik, thank you for being here with us
today. Thank you. I would also like to thank Dr. David Bray
for being our guest cohost on CXOTalk yet again. David, thank you again for joining us. Thanks for having me, Michael. Thank you, Fredrik. Michael, don’t forget to thank the robot too. Okay. Batu [Akan], dude, thank you so much. Everybody, thanks so much for watching. Before you go, please subscribe on YouTube
and hit the subscribe button at the top of our website. We’ll send you great material about upcoming
shows. Thanks so much, everybody. I hope you have a great day. We’ll see you again soon. Take care. Bye-bye.
