5 thoughts on “Shuri’s Remote driving”

  1. Based on my past experience with simulator/VR programming I find it very natural that the dust collapses and Shuri drops to the floor. That behaviour is what I’d expect as the default, not something that needed to be added.

    Partly it’s because the developers would be thinking of this as a driving game / simulator, not an actual physical car. Mario Kart or Forza or Gran Turismo don’t need seatbelts or roll cages, so it wouldn’t occur to me that an upscale AR/VR version could be a physical danger to the player.

    Mostly it’s because of how simulation systems work. There’s a story about an early US Navy distributed simulation, where one computer system was running an aircraft carrier, and another system the planes parked on its flight deck. The first computer crashed, causing the aircraft carrier to vanish from the virtual world. The second system noticed that there was now nothing holding the planes up, so it dropped them all into the ocean. All components were working as designed and correctly modelling the real world; the combination was just unexpected.

    For the remote driving interface, it presumably can handle a variety of vehicles. Some have four wheels, some six. Some have two doors, others four. And the vehicle can change while being operated: sun roofs can be opened, windows lowered, windscreens smashed by debris thrown up.

    So all the individual components of the simulated vehicle would be coded (probably in the common superclass) “if the real vehicle does not have this component, remove yourself” (by collapsing into dust). Handles both initial configuration and any damage occurring during the drive. But nobody thought to add special code “unless a real human is sitting on you”.
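That inheritance pattern can be sketched in a few lines of Python. This is purely illustrative; all class and method names here are invented, not anything from the film or a real simulator:

```python
class VehicleComponent:
    """Hypothetical base class for every simulated vehicle part."""

    def __init__(self, component_id):
        self.component_id = component_id
        self.present = True

    def sync(self, real_vehicle_state):
        # Default rule inherited by every component: if the real
        # vehicle no longer reports this part, remove the simulated
        # copy (rendered as collapsing into dust).
        if self.component_id not in real_vehicle_state:
            self.collapse_into_dust()

    def collapse_into_dust(self):
        self.present = False


class Seat(VehicleComponent):
    """The missing special case: a seat should check for an occupant
    before vanishing, instead of inheriting the default behaviour."""

    def __init__(self, component_id, occupied=False):
        super().__init__(component_id)
        self.occupied = occupied

    def sync(self, real_vehicle_state):
        if self.occupied:
            return  # never drop a seated human
        super().sync(real_vehicle_state)
```

With the base-class default, an occupied seat vanishes the moment the real car is destroyed; the `Seat` override is exactly the “unless a real human is sitting on you” check that nobody thought to add.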

    Could be fixed by going through the code for the remote driving interface, and every other simulation, looking for execution paths that could cause injury. I’d just put down a few gym mats on the floor.

    • I might see all this as bad coding, except that a) Wakanda is the most advanced technological culture on the planet, and b) there’s a general AI in the lab who should be able to respond to catastrophic errors in milliseconds.

      • No one who actually studies AI is willing to comment? Oh well, I’ll have to try.

        One problem with an AI that prevents injury would be a sort of moral hazard, in that the people using the simulator never suffer for any of their actions. One aspect of simulation and training and play in general is that there should still be consequences for mistakes, just not as severe. Here Shuri, who is in the age group most likely to believe themselves immortal, is at risk of a bruised backside or cracked bone (which Wakandan doctors can easily fix), but not of actually dying in the simulated high-speed crash. Griot could have decided not to intervene, predicting that a few days’ discomfort will make Shuri more careful in future when she is behind the wheel of a real car – especially a real car that doesn’t have a guardian AI.

        The other problem would be something like the paperclip maximizer, if I’m reading the summaries right. If Griot is protective of Shuri, or Jarvis of Tony Stark for another example, neither human will get much done. Shuri and Tony both do R & D, which almost by definition has new and unanticipated consequences. If Griot / Jarvis were programmed to protect them from any possible injury, the AIs would lock them out of their workshops.

      • I wholly agree. Encouraging moral hazard is a problem with all safeguards, and the remote driving interface is already steeped in it. A smart AGI should dole out meaningful consequences to help avoid it.

        I’m less worried about protective turning into smothering. Each AI seems reasonably subservient, and has a rich enough theory of mind, that proscribing something so fundamental to these characters seems out of line.

  2. Pingback: Agent Ross’ remote piloting | Sci-fi interfaces
