itmeJP Community


[FAR VERONA // S02E07 // Q&A] Interstellar Vignettes

far-verona
q-and-a

(AdamKoebel) #1

all aboard the PROTECT YA NECK


(Marachime) #2

this episode was so satisfying :heart_eyes:


(apepi) #3

Wheat did inspire me to play female characters, and they are honestly the most fun characters that I have played. It taught me that it is okay to roleplay something that you are not, and sometimes roleplaying something that you are not is better than roleplaying something that you are. It gets you out of your own headspace and into someone else’s, and can teach you how to be more empathetic, or to connect with parts of yourself you never normally would.

I have now played a crazy changeling who never had their own identity, a strong female character who was willing to sacrifice people so that the rest of the tribe survived, and an engineer who always ended up making situations worse.

Watching Wheat play has made me a better player, and even, perhaps, a better person.


(Womprats) #4

Hey @AdamKoebel , have you done any reading into any machine learning material? I’ve been thinking about how even the high-level concepts behind something like an artificial neural network (ANN) can be applied to fictional depictions of robot people / synths / etc. in interesting ways.

One thought that I’ve had is that the ANN technique relies upon a huge number of iterations to reach a completed / (near-)optimal state. For example, if you had an ANN that played Tetris, its learning would involve playing a crazy number of games using reinforcement learning. The only things the ANN knows at the start are what the valid inputs are and that points are good (the desired output), so it’d start by trying completely random stuff and essentially take notes over time regarding what works and what doesn’t. By weighting its decision making toward the moves that yield the most points, it gets better and better at scoring as it continually experiments. However, the exact “why” of its decision making is effectively impossible to entirely resolve, as it is buried under a mountain of calculations: an input is processed by a ton of nodes/neurons to arrive at a conclusion. Applied to a synth, you get something like having difficulty explaining in words the reasoning behind an action, since that math doesn’t easily translate into something like the “feelings” that motivated it.
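The “start random, take notes, weight toward what scores” loop above can be sketched in a few lines. This is just a toy (a weighted-choice learner, not an actual neural network, and the Tetris “environment” is a made-up payoff table), but it shows the same reinforcement idea:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

class TinyLearner:
    """Toy reinforcement-style learner: starts by acting at random,
    then weights its choices toward actions that have earned reward."""

    def __init__(self, actions):
        # Every action starts equally likely -- pure experimentation.
        self.weights = {a: 1.0 for a in actions}

    def choose(self):
        # Pick an action with probability proportional to its weight.
        actions = list(self.weights)
        return random.choices(actions, weights=[self.weights[a] for a in actions])[0]

    def reinforce(self, action, reward):
        # "Take notes" on what worked: nudge the weight by the reward,
        # floored at a small positive value so no action is ruled out forever.
        self.weights[action] = max(0.1, self.weights[action] + reward)

# A rigged stand-in environment: "rotate" scores, "drop" loses points.
payoff = {"rotate": 1.0, "drop": -0.5, "shift": 0.0}

agent = TinyLearner(payoff)
for _ in range(500):
    action = agent.choose()
    agent.reinforce(action, payoff[action])

# After many iterations the agent heavily favors the scoring move,
# but the "why" lives only in the accumulated weights, not in any
# human-readable reasoning -- the point made above, in miniature.
print(max(agent.weights, key=agent.weights.get))  # prints rotate
```

In a real ANN the “weights” are spread across thousands of neurons instead of one table per action, which is exactly why the decision making becomes so hard to unpack.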

A synth would obviously have a super robust ANN or whatever other model their brain uses in order to deal with the world around them, but you could still imagine scenarios they might run into that are just so unique and outside their understanding-of-the-world that the way they respond could be super bizarre. Again, because the calculations behind the decision making are so complex, a totally new and unexpected input could yield entirely unexpected results (from a human perspective, at least; to the synth, it might still seem “logical” until it receives feedback of some kind flagging it as a “bad” decision). Almost like an alternative to the classic “ERROR ERROR WHAT IS LOVE CAN’T COMPUTE SELF-DESTRUCTING” style response.

I’m also thinking about how a synth would respond to feedback that is entirely counter to its training. For example, rather than being hard-coded to respond to big strong men a certain way, Christine might have been trained with “make big strong man happy” or whatever as the desired output (probably with guidance in order to yield the specific personality her creators wanted). So if Johnny eventually blows up on her, that raises a lot of cool questions about how her model would go about adapting to this new feedback telling her that her behavior does not make the big strong man happy after all. How much data would be needed to get her to change? Just one event probably wouldn’t cause much change, since her model has been reinforced so heavily to produce her personality. How would she go about finding new ways to make the big man happy? The methods might be very non-obvious and strange at first, until a new successful strategy is reinforced enough times.

Or maybe, what if she IS hard-coded to respond to the big man the way she does, but she ALSO has her model trained with the desired outcome of “make big strong man happy”? This could mean that the “protect me Johnny” schtick would be impossible to turn off (though possible to re-code, since a coded sub-routine can be altered by a human), but she might simultaneously display a different learned behavior. That could lead to an interesting split-personality scenario where she rapidly alternates between the forced “protect me Johnny” response and a different response (maybe one apologizing for bugging Johnny or something) put forward by her brain model.
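To put a rough number on the “how much data would be needed” question, here’s a toy sketch. All the numbers are invented and the update rule is the simplest possible one, but it shows why a single bad outcome barely dents a heavily reinforced behavior while sustained contrary feedback eventually flips it:

```python
# Toy illustration (all values hypothetical): a behavior reinforced
# many times during training resists one contrary data point, but
# erodes under repeated negative feedback.

weight_protect_johnny = 100.0  # "protect me Johnny": reinforced constantly in training
weight_apologize = 1.0         # an alternative behavior, barely reinforced
learning_rate = 0.5

def update(weight, reward):
    # Simple additive update, floored at a small positive value
    # so the behavior is never erased outright.
    return max(0.1, weight + learning_rate * reward)

# Johnny blows up once (reward = -1): the dominant habit barely budges.
weight_protect_johnny = update(weight_protect_johnny, -1)  # 100.0 -> 99.5

# Only after the same negative feedback repeats many times does the
# old behavior finally lose out to the alternative.
for _ in range(250):
    weight_protect_johnny = update(weight_protect_johnny, -1)

print(weight_protect_johnny < weight_apologize)  # prints True
```

Real training updates are gradient-based rather than a flat additive nudge, but the shape of the answer is the same: one event moves her a fraction of a percent, and change only comes from a long run of consistent feedback.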

I’m sure there are a ton of other neat ideas for fictional applications of ML concepts out there!


(thatshawwwn) #5

the real question i have is when will Haley be told to watch her step, kid? Maybe i missed it?

jokes aside though, loving where this is going, can’t wait to watch what’s next unfold