Error does not compute

Error 0101010101010101.

1 Like

Your fats are way off.

1st off, it’s “Ulmer” with a U, not “Olmer”.

2nd:

The Nobel Prize ceremony took place in 1970. Olof Palme was killed on February 28th, 1986. And the Berlin Wall fell on November 9th, 1989, not “shortly after”. A LOT of things happened between 1986 and 1989.

3rd:

Where does this come from? Ulmer clearly states in Audio Log #3 that: “The cytokine release that occurs seems to only be delayable, not preventable, which of course rules out any form of human application.”

And the only reason Ulmer agreed to be the 1st human test subject, not the 10th, 13th, 50th etc., is because he thought: “I would be the test subject for the invasive procedure. This… disease has already suppressed my immune system anyway, so it shouldn’t trigger a cytokine release.” and: “With human coordinated signal cases we could move ahead at a hundred times the speed we can with rats and monkeys.”

4th:

There is no confirmation of it in the game. The only thing that is said is by Ingrid, who states: “The first phase of the human experimentation on Dr. Von Ulmer - which I strongly opposed as you are aware - was initially successful, allowing us to start establishing a digital dictionary of human electric and chemical neural transmissions. However, it’s now been 17 days since the unexplained incident, and since then, we have but reason to believe that Dr. Von Ulmer has entered a vegetative state with no hope of recovery.”

5th:

Holberg isn’t a spy in FOA. Holberg runs the place. Holberg’s Character Biography clearly states that since 1989, he is the head of FOA and that he answers directly to the Supreme Commander of the Swedish Armed Forces, not to the Prime Minister. The Prime Minister is a political figure, the Supreme Commander is a general; there is a big difference.

Do get your facts straight.
If you need help, you can read most of the story in my reply here: Let's talk alpine unrest

1 Like

Your spellling is off (as is mine, deliberately, so you can correct me back).

It is “facts” with a “c”, not “fats”.

:smile::hugs:

I appreciate your insight into the lore of GZ, @Aesyle, and your generosity in sharing that with us. I must admit that I have a hard time following the background story, as it is dispensed in tiny puzzle pieces without chronological order, and often very subtly or indirectly. And as this information is made available drop by drop over the many months you play the game, you basically have to take notes and keep a record to fit the pieces together. I can’t muster that kind of discipline for this game. But I really appreciate that you can. :+1:

That is a typo. Happens with humans. Also, the longer the reply, the higher the chance of one slipping in.

As far as the story in GZ goes, sure, you can take notes. Or you can memorize it if you have a good memory. Also, you can go over missions and their items in the Log menu at any point in time to re-read them.

… and I was only teasing you, obviously :wink:

Of course you are right about notes, memorizing stuff, and rereading the log. But I just never get around to it. So my post was basically just a “thank you, please continue to enlighten us”.

1 Like

I thought there was more in the game about Von Ulmer uploading his consciousness into the FNIX system. Can’t remember though, I am more like Ian, obviously. :wink:

I think there is more, because from the passage you quoted, @Aesyle, I would never have arrived at that theory.

1 Like

In Behind the Curtain we find Veronika’s floppy disk.
"Random notes #147

  • Finally got my OS up on the Gray…
  • First trial with neural data compilation tomorrow. Exciting!"

The neural data is likely von Ulmer’s. The OS is FNIX.

2 Likes

The Gray, lol. :smile:

1 Like

The Gray (Cray) is a vector processor, so I suppose it would be suited for old-school neural network simulations (it sucks at scalar code, though). I have no idea about Generic Algorithms and whether they are suited for parallel execution. That is @Flick’s field.
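To illustrate the vector vs. scalar point, here is a toy sketch in Python, with NumPy standing in for the Cray’s vector pipeline (illustrative only; the layer sizes are made up and nothing here is from the game):

```python
import numpy as np

# One neural-net layer, computed two ways. The vectorized form is
# a single long, regular operation, which is what a vector machine
# is built for; the scalar loop does the same math one multiply-add
# at a time, which is what such machines are bad at.

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))  # layer weights
x = rng.standard_normal(128)        # input activations

# Vectorized: one matrix-vector product plus an elementwise tanh.
y_vec = np.tanh(W @ x)

# Scalar: identical math, element by element.
y_scalar = np.empty(64)
for i in range(64):
    acc = 0.0
    for j in range(128):
        acc += W[i, j] * x[j]
    y_scalar[i] = np.tanh(acc)

assert np.allclose(y_vec, y_scalar)
```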

1 Like

@IanForce Generic or Genetic?

Genetic. There are a lot of typos going around these days :wink:

I wasn’t sure, sorry.

Genetic Algorithms aren’t really likely to have been employed in the SM project, or for building FNIX. I could imagine some Simulated Annealing perhaps; that’s similar. Back in the day we used to write parallel sorting algorithms, and they’re not that dissimilar to the GA model. (One way to look at AI is that it’s all just searching or sorting.)
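For flavor, a minimal simulated-annealing sketch (purely illustrative; the objective and parameters are made up, nothing to do with the SM project or FNIX):

```python
import math
import random

# Minimize f(x) = x^2 by proposing random nudges and occasionally
# accepting worse moves; that tolerance shrinks as the "temperature"
# cools, so the search settles down over time.

def anneal(f, x, temp=10.0, cooling=0.95, steps=300):
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1.0, 1.0)
        delta = f(candidate) - f(x)
        # Always accept improvements; sometimes accept regressions,
        # less often as the temperature drops.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if f(x) < f(best):
            best = x
        temp *= cooling
    return best

print(anneal(lambda v: v * v, x=8.0))  # typically ends up near 0
```

The randomized accept/reject step is what makes it feel GA-adjacent: both wander the search space instead of marching straight downhill.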

Anything you can represent with high granularity is a good candidate for parallelism, like scoring fitness on ‘genes’ or processing a neural net. The Gray as a vector processor is a decent choice, I think (without deep reading).
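Here is what that looks like as a toy GA generation step (everything here, the fitness function, population sizes and mutation scheme, is made up for illustration):

```python
import random
from concurrent.futures import ProcessPoolExecutor

# Fitness scoring is "embarrassingly parallel": every gene can be
# scored independently on its own core, which is exactly the
# high-granularity property described above.

def fitness(gene):
    # Toy objective: prefer genes whose values sum close to 42.
    return -abs(sum(gene) - 42)

def next_generation(population):
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(fitness, population))  # parallel scoring
    # Keep the fitter half, refill by mutating the survivors.
    ranked = [g for _, g in sorted(zip(scores, population), reverse=True)]
    survivors = ranked[: len(ranked) // 2]
    children = [[v + random.choice((-1, 0, 1)) for v in g] for g in survivors]
    return survivors + children

if __name__ == "__main__":
    population = [[random.randint(0, 10) for _ in range(8)] for _ in range(16)]
    for _ in range(20):
        population = next_generation(population)
    print(max(population, key=fitness))
```

Spinning up a process pool for a toy objective like this is overkill, of course; the point is just that the scoring step has no dependencies between genes, so it maps cleanly onto parallel (or vector) hardware.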

Funnily enough, I once tried designing a robot apocalypse game involving the central robot AI (the skynet) using GA’s to develop new machines ad hoc for the players. I hoped for a good evolving AI meta-battle. However, making a fun GA was pretty hard. Making un-fun murder bots wasn’t that difficult. Let’s hope FNIX never cottons on to the GA idea.

3 Likes

That computes.

There is a psychological theory that fun (also humor, as in jokes) emerges when you find systematics in your environment, when you learn something. An extreme example: good jokes are overpowered, exaggerated instances of insight - they usually come with a sudden turn, where you finally understand what things were actually about. The joke’s punch line is an artificially over-amplified moment of insight, and the laughing/enjoyment (otherwise hard to explain, psychologically or evolutionarily) is the exaggerated joy you experience when learning a systematic aspect of your surroundings. That was the first time I heard a theory about why “jokes are funny” (one that even explains what “funny” means at the same time). And the theory even made sense. Joy as a response to insight makes sense from a reward perspective. And once such a system is present (in the brain, or the human mind), it can be abused by artificial, over-fitting stimuli (jokes).

Following this, enjoyment is based on an underlying systematic, on information. And information is the opposite of entropy. Hard to create, or to keep up.
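In the Shannon sense you can even put numbers on that. A toy illustration (first-order symbol frequencies only, so it ignores ordering, but the point survives):

```python
import math
from collections import Counter

# Bits per symbol from the symbol frequencies alone: a string built
# from a small, systematic alphabet carries less entropy per symbol
# than one with no repetition at all.

def entropy_per_symbol(text):
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(entropy_per_symbol("abababababababab"))  # 1.0 bit: two symbols
print(entropy_per_symbol("qwertyuiopasdfgh"))  # 4.0 bits: 16 distinct symbols
```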

Whoops, I must have had a seizure catapulting me back to my scientist days. lol

2 Likes

I think you and I are swapping careers and heading in opposite directions, Peg. I enjoy reading your insights into these things.
There’s a lovely theory that consciousness evolved as an entropy filter so the brain doesn’t get so overwhelmed by everything it has to process. This ties in with learning and pattern matching, and fits in with your observations above.

2 Likes

I once was good at talking at vast length about a plethora of stuff. Mind and brain stuff. It seems far away, but sometimes something creeps back up the memory drain.

Possible. At least consciousness does not “see” everything that happens in the mind. Otherwise… total breakdown.

My personal theory is that consciousness evolved as we gained the ability to cognize about past and future experiences. There are many simple organisms that just function; their nervous systems respond (in complicated ways, and learning takes place, but the relevant input is the external stimulation). We, however, can un-focus from the present and remember past situations or extrapolate future ones. And we can set “ourselves” in relation to these.

We are able to create internal representations of different situations, and also of ourselves as the reacting part. We can create a model of ourselves, reduced and idealized, that is separate from our actual complete being, and set this model in relation to things that we made up ourselves. I think a major basis of consciousness is this ability to separate ourselves - in cognition - from ourselves. That makes it possible to “look at us”, to think about ourselves, and maybe to direct attention to certain aspects of a situation based on biographic memory. Because you can only “see” something that is different from something else (we can actually only perceive differences between things; in the visual system, e.g., if you hold the input absolutely constant, your vision will get blurry until you can’t see anything anymore).

Long story short: I think the ability to create a (reduced, simplified) model representation of ourselves, for simulation and observation/attention purposes, is the basis of, or simply is, consciousness. Which does not explain why the phenomenal part is there (there is an instance, which is “me”, that feels/sees, that observes in an immediate/feeling/touching way…), but that is a different story.

2 Likes

It’s not just us, Peg. There’s a paper (I think this one) suggesting rats are capable of modelling their future performance of tasks before attempting them.

1 Like

No, it is not just us. I believe there are many complex animals, including the human species, that have a consciousness (see the rouge test; although personally I think that test is flawed). And that have feelings as well.

But there are also a lot of worms in the ground with very simple nervous systems. For some of them we have counted the nerve cells down to the last one (C. elegans, for example, has exactly 302 neurons). That is far fewer than I would expect you need for creating complex models of yourself.

We are kind of off-topic, btw. :smile:

2 Likes

If the rouge test is about wiping things on animals and showing them a mirror, the flaw is that not all animals are groomers, for a start. Some animals don’t care.

The topic was essentially dead after the OP got the news from me on Discord and from Aesyle here.

2 Likes

That is a good point.

My counter-argument is that you don’t need consciousness for that to work. There are reflecting surfaces in nature (water). And removing certain objects from your body might be the result of a learning process, be it an onto- or phylogenetic one.

2 Likes

Yeah, you two, it is all very nice and scientific, but we all know that consciousness is the soul :wink: