Observations Archives - Page 3 of 5 - Wootcrisp
 

Category: Observations

A small success story for my week has just wrapped up. There is a large, unpopular academic publishing company named “Elsevier” that, up until now, I have had to accept as an intimate part of life. Several years ago they acquired the pdf organizing app I was using, called “Mendeley“. Mendeley was fantastic, really. It did everything you wanted it to do: rename pdf files to a standard format like “<author>_<date>.pdf”, store them where you want them stored, automatically fill out metadata by extracting the article DOI, use browser-like tabs for the pdfs currently open, set the proportion of screen devoted to an article versus its metadata, export pdfs with standard pdf annotations, easily switch citation styles for bibliography items, cost nothing, shrug it off when you go over your 2 GB of account storage, and offer a basic phone app that syncs and reads the stored pdfs. So when I read that Elsevier had bought Mendeley, I had a good idea of what to look for in alternatives: in particular, the “open” alternatives Zotero and Calibre. What I quickly saw was that they were indeed promising efforts, but they were certainly not the polished F-35 that Mendeley was.

First off, both of these programs made you use the system default program for opening pdfs, which is a bad sign. It means there is a disconnect between the program and how it is being used: you’re not going to be searching your pdf comments using the pdf organizer’s search bar, never mind all the other ways a pdf might be annotated by a separate program, annotations that effectively become a second life for that pdf, since there’s no guarantee your pdf organizer will understand or act on them. But with the use of some plugins and a bunch of reading of the Calibre and Zotero manuals, I could see that one day I would be back to use them for real.

I shrewdly put Calibre on “book pdf duty” since books were too large to keep in the Mendeley library. The basic functionality for organizing the library was there, and the <epub/mobi/pdf> conversion plugin always did me right whenever I had to immediately kill an “epub” or “mobi” by converting it to pdf. However, it was absolutely out of contention as my daily driver for journal articles, simply because DOI-based file renaming didn’t work out of the box, making new additions to the library look like a list of “Unknown” and “<!DOWNLOADED FROM EBOK>” titles. Zotero was similarly advertised as an open project, but whatever it may have done better than Calibre, the account you make for syncing was limited to something like 300mb. Since configuring around this right out of the box was a problem, I put it aside, but I did decide to keep it: though it may have felt like riding a wildebeest at the time, it also felt closer to one day learning Mendeley’s panther style.

Then a few months ago, my Mendeley Android app notified me that it was being discontinued soon, and then it was discontinued and would no longer open. Just like that. I spent years with this app.

Fast forward to this week, and I have now spent months living with the consequences of this amputation. I attempted to put the Calibre database in a Nextcloud folder to sync with my phone, but the weirdness of syncing the database folder while another device is also synced, and might also be doing something else to that file, was ultimately an insurmountable hurdle for the few hours I had to figure it out. Was the Calibre Android app really even a good replacement for the Mendeley Android app? A recent sense of urgency pushed me to look into self-hosting packages for Zotero that might work around the 300mb limitation. There were in fact several options, like a Docker server or one that runs an npm server on port 8081, but as you can imagine I was stoked about neither. And then I found the just-released “Zotero beta“, which allows you to sync files privately, without storage limits, using WebDAV, which my Nextcloud server supports. So I imported my Mendeley library, synced it to my phone, and in mere minutes Elsevier had finished expelling me as a user.

 

 

The faster I run, the faster my to-do list grows, and the faster I see I must run. Every day I swat angrily at the varyingly important tasks that past me has given to present me.

The task list is in chronological order, chunked by day, using a “*” prefix to indicate completion. Had I proper foresight, I tell myself, I would have at least used Markdown when I started it, where instead of a * I would have used “* [ ]” and “* [x]” to prefix each line. There are legitimate pros and cons to using Markdown, however. In this case the most practical benefits are that you can denote headings with # or ## or ###, and strike-through with ~~[item]~~, but you can also make basic tables and embed images the same way you would with html. That latter stuff is a little too fine-dining for a personal to-do list in my opinion, as time is really of the essence.
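To make that concrete, here is a hypothetical sketch of what a day or two of the list would look like in Markdown (the dates and items are invented for illustration):

```markdown
## 2021-01-12
* [x] ~~renew domain~~
* [x] email supervisor
* [ ] back up phone

## 2021-01-13
* [ ] fix Calibre sync
```

The `## ` headings give you the day chunks, and the `[x]`/`[ ]` checkboxes replace my bare “*” convention.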

If you need a professional-looking to-do list for work or something, then the trade-offs are slightly different. Pure Markdown has some issues. First, you’ll notice that you need to put two spaces at the end of each line to force a line break. That’s not very “time is of the essence” in my opinion. Then you will note that rendering Markdown requires a text editor that can do that; I will not seriously suggest using pandoc terminal commands for a simple to-do list, and for a more exaggerated form of that same reason I do not recommend reStructuredText for a to-do list. So for text editing I recommend Atom, as it has quite a buffet of packages that are easy to install and allow you to do things like mix Markdown with LaTeX. While this can tempt you into breaking convention—and therefore portability—because of the idiosyncratic set of packages needed to properly interpret your document, it is easy to imagine recovering that portability with a series of package installation calls at the start of your document, like so:

apm install language-latex
apm install language-markdown

I should also note that QOwnNotes has treated me well for many years, as it is very specifically designed to sync with ownCloud/Nextcloud, and I’m hesitant to uproot that system for something like “atom-ownsync”, which doesn’t seem to get updated very often.

 

It’s about that time of year where I check in with my mirror dance progress by dancing to the same music that was in my first videos.

2020 original (bad audio): 

YouTube player

2020 director’s cut: 

YouTube player

Last year’s: https://wootcrisp.com/2019/03/05/three-year-mirror-dance-progress/

At first I wasn’t sure what to think about this dance, but now that I’ve given these videos my undivided attention, while sitting comfortably in the bath with my new bath shelf and using some nice new headphones I was given for Christmas, I will indeed say that the dance has its quality moments.

YouTube player

Also, in exchange for a free battery, I will say that the particular Bomaker “CA-Dolphin Ⅰ” headphones I just watched the video with were worth doing business with, given that I don’t know anything about how they were made. I really would like to believe the company is wholesome, but in this matter I haven’t got a clue as to how I could actually learn anything about the company “Bomaker“.

This kind of endorsement uncertainty reminds me of a website I developed about a decade ago: GreenAds.org. The success of a product submitted to the site was based on a combination of a vote_count and a green_score, used to rank products similarly to something like Reddit. Individuals were incentivized to submit good products because they could include their affiliate id in the url, and publishers could serve a feed of popular items in their banners, swapping their own affiliate codes into the product feed.
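As a rough reconstruction, the ranking could have looked something like the sketch below. The formula, the names, and the 45-hour decay constant are all my own assumptions, loosely in the spirit of Reddit-style “hot” ranking, not the actual GreenAds code (which is lost anyway):

```python
import math

def rank_score(vote_count, green_score, age_hours):
    # Hypothetical sketch: votes weighted by a 1-10 "green" rating, so a
    # heavily upvoted but un-green product can't dominate the feed.
    base = math.log10(max(vote_count * green_score, 1))
    # Older submissions decay so the feed keeps turning over.
    return base - age_hours / 45.0
```

With something like this, products would be sorted by `rank_score` descending when building the banner feed.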

I recently managed to misplace all the code from GreenAds, but that’s okay since it was garbage code anyway [edit: I shouldn’t say it was garbage. It was done appropriately as far as I could tell, just using an older php framework called “Smarty”, which is too much for me to think about, but I love the look of their website]. The project that motivated GreenAds was Vidipedia.org, and there’s a bit more information about both projects at this Patreon I’ve been trying to work on when I have time: https://www.patreon.com/vidipedia

YouTube player

One of the ways I’ve been procrastinating recently is by thinking half-heartedly about Rubik’s cubes. I’ve never solved one, and I don’t want to learn the algorithms, so I slog through some trial and error every now and then. To inspire a recent attempt, a few months ago I opened up the chapters on “Cubology” in “Metamagical Themas“, skimmed through some of it, then put on the GoPro and gave a newly acquired 2x2x2 cube the old college try. In what seemed like the blink of an eye the GoPro was out of battery and I was sitting on nearly 18 GB of boring footage from several attempts.

This scared me enough to not try again for a few months, but my mind has returned to the topic twice since then, and it recently led to a possible coincidence.

The first time my mind returned to the Rubik’s cube is an aside to this story, but one worth mentioning, as it should be of interest to the general reader. I was trying to think of a more dynamical way to connect a set of neural fields such that each field doesn’t have to be fully connected to every other field in order for them to be jointly capable of representing their entire space of combinatorial possibilities. The “dynamic neural field theory” (DNFT) approach to cognitive representation can already incorporate something like this in two ways. Multidimensional representations are typically decomposed into a set of fields that maintain bidirectional projections with each other through a common field. This is very useful because neural fields with more than 2 dimensions are challenging to simulate ((Possibly the only “integral” 3-dimensional construct that we have is “colour”, according to \textcite{Shepard2001}: “We may need three dimensions of color not because the surfaces of objects vary in just three dimensions but because we must compensate for the three degrees of freedom of natural lighting in order to see a given surface as having the same intrinsic color regardless of that illumination”. It’s a very complex issue, but I often return to the 4-layer colour encoder-decoder network reported in \textcite{Lehky1999} when I’m trying to think about modeling representations of colour: “A particular ratio of activities in the population of wavelength tuning curves is assigned the label “white,” and the distribution of wavelengths that caused it does not matter. The set of all labels forms a qualia space. In this way the system avoids dealing with a difficult inverse problem and instead does something simple but perhaps behaviorally useful. Information is lost in this process, but the information that remains appears useful enough that there are evolutionary advantages to developing such systems.”)).
In this talk by Sebastian Schneegans we can see some methods for binding representations across three feature dimensions, for example with three separate 1-dimensional fields and a single 2-dimensional field, similar to what’s happening here:

YouTube player

 

In combination with a kind of “use-it-or-lose-it” strengthening and weakening of the connections between these fields—“competitive learning”, “sparse coding”—the computational burden is lessened dramatically. But this is not quite what I had in mind when I was thinking about Rubik’s cubes. Rather, I wanted some way to go from a representational space with a fixed frame of reference for each field, like small-medium-large, to a relative frame of reference, like small_fruit-medium_fruit-large_fruit, via a sequence of discrete operations on the set of fields themselves. Naturally, this made me think of the Rubik’s cube again, with the faces being a set of six 2-dimensional fields. I think it could be useful to think of each “facelet” location on a face of the cube as having a receptive field centered on a “preferred” distance for the transforms that move that facelet to a location on another face. The decision to rotate the cube in a particular way would then originate from integrating activity across locations, given each location’s rotation-distance preferences/receptive field. It’s weird to think of Rubik’s cube solutions as requiring decision procedures like this. Now, before you say “it’s not weird that a Rubik’s cube solver would need a decision procedure using information like that, and it’s irrelevant to what a Rubik’s cube metaphysically is”, let me discuss the next time I thought of the cubes.
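To make the facelet idea slightly more concrete, here is a toy sketch. Everything in it, the Gaussian receptive field, the function names, the move encoding, is my own hypothetical illustration of the decision procedure described above, not a working cube solver:

```python
import math

def facelet_activity(preferred, actual, width=1.0):
    # Gaussian "receptive field": activity peaks when a candidate move
    # displaces this facelet by its preferred distance.
    return math.exp(-((actual - preferred) ** 2) / (2 * width ** 2))

def choose_rotation(facelet_prefs, candidate_moves):
    # candidate_moves maps a move name to the per-facelet displacement
    # distances that move would induce. The winning move is the one whose
    # displacements best match the population of facelet preferences.
    def total_activity(move):
        return sum(facelet_activity(p, d)
                   for p, d in zip(facelet_prefs, candidate_moves[move]))
    return max(candidate_moves, key=total_activity)
```

The point is just that the “decision” falls out of integrating activity over the whole population, rather than from any symbolic rule.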

The second time I thought of the cubes since my failed attempt at the 2x2x2 was this evening, as I tried to wrestle my mind into focusing on something properly erudite. I opened Metamagical Themas once again and began reading “Magic Cubology” in earnest. In particular, I was looking for the page containing that most interesting of anecdotes about Rubik’s Cubes, one I had never forgotten: the Rubik’s cube has been used as a model of quarks. I wanted to photograph this page for a new collection of curiosities that I’m putting together, possibly as a daily picture site.

It was just at this point that my girlfriend came into the room and asked if I wanted to finish watching “Planes, Trains, and Automobiles” with Steve Martin and John Candy. This is where the coincidence happens. A few minutes into the movie, my mind conjured up an old picture of Steve Martin holding “Metamagical Themas”:

 

What are the chances? We had been watching the movie a bit last night, so Steve Martin likely activated the idea I have of him as an erudite celebrity, and I think he does look a bit like Douglas Hofstadter, but… Metamagical Themas was also lying next to the bed from which we watched the movie, so that might have suggested the movie in conjunction with Thanksgiving time. Alternatively, I had been humming and hawing over whether to make a “daily curiosity” site or just a montage of curiosities as part of an art piece in an upcoming gallery I’d like to release in time for Christmas. This humming and hawing may have provoked me into opening the book this evening, and the memory of Steve Martin’s picture may be a genuine coincidence.

 

A moment ago I found myself trying to explain why “Graham’s number” is more exciting to learn about than it sounds, i.e. it’s “just” a really big number. No no. This number expands your notion of what “big” actually means, because you need to learn about the operation that succeeds exponentiation in order to understand it: “tetration”. This Numberphile video does a great job of explaining it.

 

YouTube player

 

As I was trying to explain the excitement of this number, it occurred to me that I better understood how Douglas Hofstadter could remember his childhood feeling of disappointment upon learning that a subscript, like the n in x_n, does not refer to anything as interesting as exponentiation, like x^n.

 

YouTube player

 

So what is the hyperoperation (x[5]r) beyond tetration (x[4]r)? It is called “pentation”:

 

 

https://en.wikipedia.org/wiki/Pentation
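The whole hyperoperation ladder can be written as a single recursive definition. A minimal sketch in Python, fine for tiny arguments, though the numbers explode almost immediately:

```python
import sys
sys.setrecursionlimit(100000)  # even small pentation arguments recurse deeply

def hyper(n, a, b):
    """Compute a[n]b: n=1 addition, n=2 multiplication,
    n=3 exponentiation, n=4 tetration, n=5 pentation, ..."""
    if n == 1:
        return a + b
    if b == 0:
        return 0 if n == 2 else 1  # a*0 = 0; a^0 = 1; a tower of height 0 is 1
    # Each level is iterated application of the level below it.
    return hyper(n - 1, a, hyper(n, a, b - 1))

# hyper(3, 3, 4) == 81 and hyper(4, 2, 3) == 2**(2**2) == 16
```

Something like hyper(5, 3, 3) is already hopeless to compute, which is exactly the point of Graham’s number: it lives far, far beyond even this notation.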

There is something incredibly satisfying about beating a corrupt system. I am currently posting this from behind a pay-for-use gateway at the Mexico City airport, connected to the internet using a DNS tunnel provided by an Android app called Slow DNS. All around the airport are signs saying “free wifi”; it just happens that this is a lie. There is no airport wifi, but there is “free wifi” available if you have an account with a telecom provider like Telcel. I do have an account with Telcel, but I was simply unable to get my credentials authorized. This is extra annoying because 1) I ran out of data a few hours ago, and 2) I had to go to the airport early.

Frustrated, I tried to buy a drink at a bakery offering “free wifi” to customers, but was told that a drink would not be sufficient to elicit the password, and I certainly wasn’t going to pay Vancouver prices for a croissant. Fuming mad, I decided to try the Slow DNS app I downloaded last week as a stopgap measure until I have the time to get iodine installed on one of my servers. To my total shock, it is actually working.

What is a DNS tunnel? For the layperson, it is sufficient to say that it is a way of stuffing normal internet traffic into the much more restricted little internet that still works behind corporate pay-for-use gateway pages. More advanced readers might want to read a more detailed explanation here.
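The core trick is that outbound data gets smuggled inside DNS query names, which captive portals nearly always resolve for you. A toy sketch of the encoding side, where the domain and function names are made up for illustration (real tools like iodine also handle the reply channel, fragmentation, and much more):

```python
import base64

def payload_to_query_name(data: bytes, domain: str = "t.example.com") -> str:
    # DNS names are case-insensitive and each label is at most 63 characters,
    # so tunnels typically use base32 and split the payload across labels.
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [encoded[i:i + 60] for i in range(0, len(encoded), 60)]
    # The tunnel server, authoritative for `domain`, decodes the labels and
    # smuggles its answer back inside the DNS response (e.g., a TXT record).
    return ".".join(labels + [domain])
```

Every chunk of your traffic becomes one more innocent-looking lookup under a domain the tunnel operator controls.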

I have been looking for an excuse to do some DNS tunneling for almost 20 years and finally, finally, I have the tools and moral righteousness to do it.

It feels so good. You simply must try it for yourself. Just make sure the websites you visit are “https” and not “http”. I have no idea who runs Slow DNS and it’s best to assume you’re being watched.

Of the many important ideas that the renowned physicist Geoffrey West explains in his recent book “Scale“, several of his remarks about temperature stuck out as demanding more widespread attention:

 

I first heard West talk about these ideas in his interview on Sean Carroll’s Mindscape podcast. I must say that he came across as quite pompous, but it became clear over the course of the interview that his research is uniquely important. In my view, his book is essential reading for any scientist.

 

Obtaining EOS cryptocurrency has recently gotten a whole lot less brutal. See the guide below if you don’t care about my life story.

[optional backstory]

Earlier this year I did some consulting work that exposed me to two very promising-seeming blockchainy projects: IOTA and EOS. I’ll describe the basic ideas behind both, but I should say that my technical knowledge of these projects is at best superficial. In the case of IOTA, it turns out the “internet of things” is not the joke that a lot of people, including myself, had thought it to be. Having your fridge connected in an information network to your toaster actually does make sense in a lot of scenarios. Blockchains are not necessarily ideal for recording the micro-transactions that these kinds of connections might produce, however, because the records pile up and aren’t very interesting. IOTA is a cryptocurrency that promises to deliver the useful elements of a blockchain (secure, decentralized) using a lightweight data structure called a “Tangle”.

The IOTA project appears to me to be quite serious, with major companies like Volkswagen and Fujitsu embracing the technology.

The EOS project is just impressively ambitious. Using a technology called “delegated proof-of-stake”, EOS appears to solve some of the problems encountered by the lovable proof-of-stake based Peercoin project by being less extreme in its commitment to “decentralization”. Owning some EOS entitles you to vote for “block producers”, who do all the hard work of verifying transactions, in addition to offering computing resources to their voting base. Only the top 21 vote-receiving block producers are entitled to block rewards, which makes for a competitive dynamic that could feasibly retain the security of total decentralization with some of the benefits of centralization (e.g., less wasteful mining). It has taken a jaw-dropping amount of thinking to pull this off. The company behind it all, block.one, raised billions of dollars that it now uses to support the development of the system. This includes a governance model that actually clearly defines how disputes and changes to the system can be resolved. In my view, the project makes my own forays into cryptocurrency voting ideas, specifically with votecoin.com, totally pointless.
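Mechanically, the “top 21” part is simple. A toy sketch of stake-weighted vote tallying, where the names and data shapes are my own illustration rather than EOS’s actual on-chain logic:

```python
from collections import Counter

def elect_block_producers(votes, n=21):
    # votes: iterable of (producer_name, stake) pairs; each holder's vote
    # counts in proportion to the EOS they have staked.
    tally = Counter()
    for producer, stake in votes:
        tally[producer] += stake
    # Only the top-n vote receivers produce blocks and earn block rewards.
    return [name for name, _ in tally.most_common(n)]
```

The competitive pressure comes from producers constantly risking their spot in that top-n list.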

There is much, much more to say about EOS, but the point of providing all this backstory is to motivate the step-by-step guide below. Obtaining EOS was a very trying experience. I literally failed to do so until just recently, and I would rank the difficulty almost as high as getting my Nvidia GPU to play nice with Ubuntu. In terms of time spent failing and time spent ruminating on the issues, I’ve surely crossed the 100-hour mark since June. I am delighted that my favourite blockchain project of last year, Bancor, ended up being the saviour in this story.

[/optional backstory]


Step-by-step guide to obtaining EOS

*Assumes you already have some Ethereum. If not, it is easy enough to obtain, e.g., on CoinSquare.

1. Install EOS Lynx on your phone. This costs money, but you need money in your EOS wallet to do anything.

2. Export your EOS Lynx keys to your computer.

3. Download and setup Scatter desktop and Scatter browser plugin and import EOS Lynx private key.

4. Make an account on bancor.network, and install the MetaMask browser plugin. Follow the directions in this video EXACTLY. In summary: A) Transfer Ethereum to BNT using the MetaMask plugin on bancor.network (do not use the “Bancor wallet”), B) Transfer MetaMask Ethereum to MetaMask BNT using x.bancor.network, C) Transfer MetaMask BNT to EOS using eos.bancor.network.

YouTube player

You will not be able to buy EOS until you have enough EOS “staked” for CPU resources, which you can do using your Scatter desktop “Vault”. Great: so you can’t buy EOS until you have EOS. To solve this you’ll need to use “CPU Emergency“, fail, install Telegram, join the “CPU911” Telegram group, and beg them to give your EOS account some CPU resources. Also note that you require a balance in your ETH wallet on Bancor.network in order to process any transaction, even when that Ethereum balance is not used in the transaction.

5. You now have EOS, but you should also attach your Scatter wallet to EOS Toolkit to vote for some block producers. Voting is important. I voted for LiquidEOS using their custom application, because Bancor deserves it, but spread your votes out.