Error 0101010101010101.
Your fats are way off.
1st off, it's "Ulmer" with a U, not Olmer.
2nd:
Nobel Prize ceremony took place in 1970. Olof Palme was killed February 28th, 1986. And the Berlin Wall fell November 9th, 1989, not "shortly after". A LOT of things happened between 1986 and 1989.
3rd:
Where does this come from? Ulmer clearly states in Audio Log #3 that: "The cytokine release that occurs seems to only be delayable, not preventable, which of course rules out any form of human application."
And the only reason why Ulmer agreed to be the 1st human test subject, not the 10th, 13th, 50th etc., is because he thought: "I would be the test subject for the invasive procedure. This… disease has already suppressed my immune system anyway, so it shouldn't trigger a cytokine release." and: "With human coordinated signal cases we could move ahead at a hundred times the speed we can with rats and monkeys."
4th:
There is no confirmation of it in the game. The only thing that is said is by Ingrid, where she states: "The first phase of the human experimentation on Dr. Von Ulmer - which I strongly opposed as you are aware - was initially successful, allowing us to start establishing a digital dictionary of human electric and chemical neural transmissions. However, it's now been 17 days since the unexplained incident, and since then, we have but reason to believe that Dr. Von Ulmer has entered a vegetative state with no hope of recovery."
5th:
Holberg isn't a spy in FOA. Holberg runs the place. Holberg's Character Biography clearly states that since 1989 he is head of FOA and answers directly to the Supreme Commander of the Swedish Armed Forces, not to the Prime Minister. The Prime Minister is a political figure, the Supreme Commander is a general; there is a big difference.
Do get your facts straight.
If you need help, you can read most of the story from my reply here: Let's talk alpine unrest
Your spellling is off (as is mine deliberately so you can correct me back).
It is "facts" with a "c", not "fats".
I appreciate your insight into the lore of GZ, @Aesyle, and your generosity in sharing that with us. I must admit that I have a hard time following the background story, as it is dispensed as tiny pieces of a puzzle without chronological order, and often very subtly or indirectly. And as this information is made available drop by drop over the many months you play the game, you basically have to take notes and keep a record to fit the pieces together. I can't muster that kind of discipline for this game. But I really appreciate that you can.
That is a typo. Happens with humans. Also, the longer the reply, the higher the chance for it to happen.
As far as the story in GZ goes, sure, you can take notes. Or you can memorize it if you have a good memory. Also, you can go over missions and their items from the Log menu at any point in time to re-read them.
… and I was only teasing you, obviously
Of course you are right about notes, memorizing stuff, and rereading the log. But I just don't get to do it. So my posting was basically just a "thank you, please continue to enlighten us".
I thought there was more in the game about Von Ulmer uploading his consciousness into the FNIX system. Can't remember though, I am more like Ian, obviously.
I think there is more because from the passage you quoted, @Aesyle, I would never have come to that theory.
In Behind the Curtain we find Veronika's floppy disk.
"Random notes #147
- Finally got my OS up on the Gray…
- First trial with neural data compilation tomorrow. Exciting!"
The neural data is likely von Ulmer's. The OS is FNIX.
The Gray, lol.
The Gray (Cray) is a vector processor, so I suppose it would be suited for old-school neural network simulations (it sucks at scalar code, though). I have no idea about Generic Algorithms and if it is suited for parallel execution. That is @Flick's field.
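(Side note, a toy illustration of the vector-processor point, nothing from the game: an old-school neural net layer boils down to a matrix-vector product plus a nonlinearity, which is exactly the long, regular arithmetic a vector machine pipelines well, whereas branchy scalar code is not. The sizes and names below are made up.)

```python
import numpy as np

# Toy forward pass of one dense layer: y = tanh(W @ x + b).
# The bulk of the work is a single matrix-vector product -- long, regular
# arithmetic that maps nicely onto a vector processor's pipelines.
rng = np.random.default_rng(1989)

n_in, n_out = 512, 256
W = rng.standard_normal((n_out, n_in)) * 0.01  # weights (made-up sizes)
b = np.zeros(n_out)                            # biases
x = rng.standard_normal(n_in)                  # input "signal"

y = np.tanh(W @ x + b)  # one vectorized sweep, no per-element branching
print(y.shape)          # (256,)
```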
Genetic. There are a lot of typos going around these days
I wasn't sure, sorry.
Genetic Algorithms aren't really likely to have been employed in the SM project, or for building FNIX. I could imagine some Simulated Annealing perhaps; that's similar. Back in the day we used to write parallel sorting algorithms, and they're not that dissimilar to the GA model. (One way to look at AI is that it's all just searching or sorting.)
Anything you can represent with high granularity is a good candidate for parallelism, like scoring fitness on "genes" or processing a neural net. The Gray as a vector processor is a decent choice, I think (without deep reading).
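To make that concrete, here's a minimal GA sketch (hypothetical, plain Python, nothing to do with the actual SM project or FNIX): the point to notice is that the fitness pass touches each individual independently, so it's an embarrassingly parallel map.

```python
import random

# Minimal genetic algorithm maximizing the number of 1-bits in a genome.
# Key point for parallelism: score() has no shared state, so the fitness
# pass over the population is an independent, parallelizable map.

GENES, POP, GENS, MUT = 64, 40, 100, 0.02  # made-up toy parameters

def score(genome):
    return sum(genome)                      # fitness = count of 1s

def mutate(genome):
    return [g ^ (random.random() < MUT) for g in genome]  # rare bit flips

def crossover(a, b):
    cut = random.randrange(1, GENES)        # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENS):
    scores = [score(g) for g in pop]                       # fitness pass: independent per genome
    ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
    parents = ranked[:POP // 2]                            # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print(max(score(g) for g in pop))  # should creep toward 64 over the generations
```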
Funnily enough, I once tried designing a robot apocalypse game involving the central robot AI (the skynet) using GAs to develop new machines ad hoc for the players. I hoped for a good evolving AI meta-battle. However, making a fun GA was pretty hard. Making un-fun murder bots wasn't that difficult. Let's hope FNIX never cottons on to the GA idea.
That computes.
There is a psychological theory that fun (also humor, as in jokes) emerges when you find systematics in your environment, when you learn something. An extreme example: good jokes are overpowered, exaggerated instances of insight - they usually come with a sudden turn where you finally understand what things were actually about. The joke's punch line acts as an artificially heightened moment of insight, and the (psychologically/evolutionarily unexplained) laughing/enjoyment is the exaggerated joy you experience when learning a systematic aspect of your surroundings. That was the first time I heard a theory about why "jokes are funny" (one that even explains what "funny" means at the same time). And the theory even made sense. Joy as a response to insight makes sense from a reward perspective. And once such a system is present (in the brain, or the human mind), it can be abused by artificial, over-fitting stimuli (jokes).
Following this, enjoyment is based on an underlying systematic, on information. And information is the opposite of entropy. Hard to create, or to keep up.
Whoops, I must have had a seizure catapulting me back to my scientist days. lol
I think you and I are swapping careers and heading in opposite directions, Peg. I enjoy reading your insights into these things.
There's a lovely theory that consciousness evolved as an entropy filter so the brain doesn't get so overwhelmed by everything it has to process. This ties in with learning and pattern matching, and fits in with your observations above.
I once was good at talking at vast length about a plethora of stuff. Mind and brain stuff. It seems far away, but sometimes something creeps back up the memory drain.
Possible. At least consciousness does not "see" everything that happens in the mind. Otherwise… total breakdown.
My personal theory is that consciousness evolved as we gained the ability to cognize about past and future experiences. There are many simple organisms that just function; their nervous system responds (in complicated ways, and learning takes place, but the relevant input is the external stimulation). We, however, can un-focus from the present and remember past situations or extrapolate future ones. And we can set "ourselves" in relation to these. We are able to create internal representations of different situations, and also of ourselves as the reacting part. We can create a model of ourselves, reduced and idealized, that is separate from our actual complete being, and set this model in relation to things that we made up ourselves. I think a major basis of consciousness is this ability to separate ourselves - in cognition - from ourselves. That makes it possible to "look at us", to think about ourselves, and maybe to direct attention to certain aspects of a situation based on biographic memory. Because you can only "see" something that is different from something else (we can actually only see differences between things; in the visual system, e.g., if you hold the input absolutely constant, your vision will get blurry until you can't see anything anymore).
Long story short: I think the ability to create a (reduced, simplified) model representation of ourselves, for simulation and observing/attention purposes, is the basis of, or is, consciousness. Which does not explain why the phenomenal part is there (there is an instance, which is "me", that feels/sees, that observes in an immediate/feeling/touching way…), but that is a different story.
It's not just us, Peg. There's a paper (I think this one) suggesting rats are capable of modelling their future performance of tasks before attempting them.
No, it is not just us. I believe there are many complex animals, including the human species, that have consciousness (see the rouge test; although personally I think that test is flawed). And that have feelings as well.
But there are also a lot of worms in the ground with very simple nervous systems. For some of them we have counted the nerve cells down to the last one. They are not as many as I would expect you need for creating complex models about yourself.
We are kind of off-topic, btw.
If the rouge test is about wiping things on animals and showing them a mirror, the flaw is that not all animals are groomers, for a start. Some animals don't care.
The topic was essentially dead after the OP got the news from me on Discord and from Aesyle here.
That is a good point.
My counter-argument is that you don't need to have consciousness for that to work. There are reflecting surfaces in nature (water). And removing certain objects from your body might be the result of a learning process, be it an onto- or phylogenetic one.
Yeah, you two, it is all very nice and scientific, but we all know that consciousness is the soul