Wootcrisp – Page 2 – Looking into things




I have a real soft spot for the field of study called “Cybernetics”. I was reminded of this today by a popular Reddit thread linking a well-intentioned, but poorly executed, film called “The Choice is Ours”. I didn’t watch it all, and am instead using this as an opportunity to link one of my favourite documentaries: “All Watched Over by Machines of Loving Grace – Episode 2 – The Use and Abuse of Vegetational Concepts”, written and directed by Adam Curtis. There are many fascinating things to learn from the Curtis documentary, and I would encourage absolutely everyone to watch it, but I’d like to add a few of my own thoughts about Cybernetics as well.

In all my years at university I never once saw a course in Cybernetics offered. It wasn’t until I started trying to model human eye movements that I was able to develop a really deep appreciation for this field. Some of the most important papers I read in graduate school were published in a relatively obscure journal called “Biological Cybernetics”. It was here, for instance, that the enigmatic Shun-ichi Amari published his seminal paper that later became a cornerstone of the Dynamic Field Theory based modeling that I do. Amari lit up my imagination with all kinds of curiously titled, yet often totally impenetrable, papers. One day I will read and try to understand “The Information Geometry of Turbo Codes”, but for now I will simply say that it is satisfying that some of my academic work has managed to intersect with another interest I’ve long had in self-organizing workplaces.

Cybernetics is very concerned with the concept of “feedback”. This is a topic serendipitously on my mind, as I’m editing some papers I wrote for my comprehensives and am just at a section discussing the differences between “unsupervised” and “supervised” learning. All that really needs to be said about that is that feedback can really change how we learn, for better or worse, and some things cannot be learned without it.
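The supervised/unsupervised distinction can be made concrete in a few lines. This is a minimal illustrative sketch of my own (not from the papers mentioned here): the supervised rule is driven by an error signal, i.e. feedback, while the unsupervised rule sees only the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # labels exist only for the supervised case

# Supervised: perceptron-style update driven by the error (target - prediction).
w = np.zeros(2)
for x_i, y_i in zip(X, y):
    pred = float(w @ x_i > 0)
    w += 0.1 * (y_i - pred) * x_i  # feedback enters here

# Unsupervised: normalized Hebbian update; no labels appear anywhere.
v = np.array([1.0, 0.0])
for x_i in X:
    v += 0.01 * (x_i @ v) * x_i
    v /= np.linalg.norm(v)

print(w, v)
```

The point of the contrast: remove the labels `y` and the first learner has nothing to learn from, while the second carries on regardless.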

For many years my bathroom has been haunted by an old copy of Stafford Beer’s “The Brain of the Firm”. I never did finish this book, which details the “Viable System Model” developed to manage the economic inputs and outputs of the tragically short-lived Allende government in Chile. This is work that is all but forgotten in the halls of academia, yet these are ideas whose time may have finally come, manifest as cryptocurrencies like IOTA. I have a similar affinity for the eccentric, but inspiring, Buckminster Fuller, to whom we owe the popularity of geodesic domes. I do hope that Cybernetics rises again in academia.

After recently watching this video on “dark” design patterns, I thought I’d describe the experience of embedding YouTube playlists on Minds.com. Notice that the playlist below works as it should: one video rolls into the next on the Nature Walks playlist. Not so when I try to embed the same playlist on my Wootcrisp page on Minds.com. What does it mean? Unless it’s an innocent programming issue, Minds and/or YouTube is doing this to incentivize their own video platform. A cursory Google search turned up no obvious way to determine which of them is the problem, but testing the playlist here shows that it is at least possible. edit:
  • Also, at the time of writing this, YouTube will only display 200 videos under a user’s upload playlist before requiring that you click the 200th video, and only 100 on any given playlist:
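These in-player limits are separate from what the API will give you: the YouTube Data API v3 pages through a playlist 50 items at a time via `playlistItems`. A rough sketch of building the paged request (the endpoint and parameter names are from the documented API; the playlist ID and key below are placeholders, and actually walking the pages needs a real API key and network access):

```python
from urllib.parse import urlencode

API_URL = "https://www.googleapis.com/youtube/v3/playlistItems"

def playlist_page_url(playlist_id, api_key, page_token=None):
    """Build the URL for one page of playlist items (API caps maxResults at 50)."""
    params = {
        "part": "snippet",
        "playlistId": playlist_id,
        "maxResults": 50,
        "key": api_key,
    }
    if page_token:
        params["pageToken"] = page_token
    return API_URL + "?" + urlencode(params)

# To enumerate a whole playlist you would fetch each page and follow the
# "nextPageToken" field of the JSON response until it is absent.
print(playlist_page_url("PLAYLIST_ID", "YOUR_API_KEY"))
```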
A year ago I compared my progress with Call of the Mirror Dance between two videos taken a year apart. I judged the dancing to be no better or worse, just different. This year I went with a shorter video than in previous years:

I find this video less inspired but of about the same technical level. 
Stellasers

Isaac Arthur discussed “stellasers” in a recent episode called “Colonizing the sun”. Stellasers are large mirrors that trap solar light and beam it, laser-like, to push on spacecraft we might want to send abroad. Why not also send information signals to stars that are likely to have planets? If there’s a better way to send VR surrogates across interstellar space, I’m all ears. Maybe once you have signalled these likely places, you start to receive a whole bunch of new signals from closer stars that didn’t know you were in the game. Information could be multiplexed across solar relay points between our sun and more distant stars using a kind of population-coded routing system that differentially prioritizes components of the signals to be sent. We could, for instance, build separate stellasers for blue, green, and red wavelengths of light, firing each colour into a distinct relay pathway tuned to that wavelength. Actually, I’d like to know whether separating bands of light increases each band’s probability of having some photons (noise around c) arrive faster than c. Red light travels faster than blue light in media other than the vacuum. I wonder if that could be built on?
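The dispersion claim is easy to check with back-of-envelope numbers. A quick sketch using phase velocity v = c/n; the refractive indices below are my own approximate values for BK7 glass, not anything from the episode:

```python
# In normally dispersive media like glass, the refractive index n is
# larger for blue light than for red, so red's phase velocity v = c/n
# is higher. (In vacuum both travel at exactly c.)
C = 299_792_458  # speed of light in vacuum, m/s

n_red = 1.514    # approximate index of BK7 glass near 650 nm (assumed value)
n_blue = 1.525   # approximate index of BK7 glass near 450 nm (assumed value)

v_red = C / n_red
v_blue = C / n_blue
print(f"red:  {v_red:.4e} m/s")
print(f"blue: {v_blue:.4e} m/s")
```

Note this is about speed in a medium; neither band exceeds c, which is why the faster-than-c speculation above stays speculation.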

There are some pretty obvious neural network analogies that come to mind here, specifically Hebb’s phase sequences.

Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley.

However, there is also an interesting role for neural agency in this interstellar communication analogy that is missing from Hebb’s descriptions. It would take the initiative of a society to realize communication with others like it, just as it would take the initiative of a collection of neurons to route its component representations through its own population code.
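To make the population-coding idea concrete, here is a standard population-vector sketch (a textbook construction, not anything from Hebb): each unit has a preferred direction and fires in proportion to how well the stimulus matches it, and the activity-weighted sum of preferred directions recovers the encoded value.

```python
import numpy as np

n_units = 16
prefs = np.linspace(0, 2 * np.pi, n_units, endpoint=False)  # preferred directions

def encode(theta):
    """Cosine tuning, rectified so firing rates are non-negative."""
    return np.maximum(np.cos(theta - prefs), 0.0)

def decode(rates):
    """Population vector: activity-weighted sum of preferred directions."""
    x = np.sum(rates * np.cos(prefs))
    y = np.sum(rates * np.sin(prefs))
    return np.arctan2(y, x) % (2 * np.pi)

theta = 1.0
print(decode(encode(theta)))  # close to 1.0
```

No single unit carries the answer; the routing, so to speak, lives in the pattern across the whole population.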
For Mac users, I’ve got a hotkey that’s worth sharing: command+shift+4, which lets you drag-select just the region of the screen you want to capture. Years ago I set my print screen hotkey to fn+F12, which was basically fine for most things, though I would often have to crop the saved file because it grabbed the entire screen. This became much more irritating as the number of monitors I was using increased, because it would save a separate file for each display. As a stopgap I started using the Grab application manually, which was annoying because I would 1. have to open that app, and 2. have to cmd+s and write a filename for the screenshot. This bugged the hell out of me and really made me rethink each screenshot I took. A few days ago I found command+shift+4, and I was almost reluctant to share it because it seemed so unfair to go through such agony for so long and give away the prize. But you see how nice I am?

Also, the joint key press indicated by the “+” symbol is how to think of unicode input, like “U+052A”, which produces “Ԫ”. But that’s a bit more involved. When you see “U+052A = Ԫ”, it means pressing option+[0→5→2→a]. If you try that on a Mac, notice that it doesn’t work, and you instead end up with “º∞™å” in your text editor. You must first enable “Unicode Hex Input” in your keyboard settings and switch to it. This is a nuisance, but it will open the wide world of unicode to you: https://unicode-table.com/en/#basic-latin. These pictures cut to the chase of what is written here: http://poynton.ca/notes/misc/mac-unicode-hex-input.html. Look at how nice it was to learn about arrow characters for the → characters above.
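The code-point arithmetic behind entries like “U+052A = Ԫ” can be checked directly; a quick Python sketch, nothing Mac-specific:

```python
# "U+052A" names the hexadecimal code point 0x052A; chr() maps a
# code point to its character, and ord() goes the other way.
cp = int("052A", 16)
ch = chr(cp)
print(ch, hex(ord(ch)))  # Ԫ 0x52a
```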

Wootcrisp ©2019. All Rights Reserved.