So I ran across this podcast at the recommendation of my chemistry teacher at EvCC and thought you might want to give it a listen. It’s about the network of fungi that live in the ground underneath trees and form a communication network with them. The main purpose of this network is to transfer and exchange nutrients, but it performs other functions too, and it is really amazing to hear about!
The woman who discovered this network is Suzanne Simard, and her story is actually pretty interesting. She was a forester in western Canada tasked with monitoring how trees that had been planted to reclaim forests were doing, and she noticed something funny going on.
[if a nearby birch tree was removed] The Douglas fir became diseased and, and died. There was some kind of benefit from the birch to the Fir. There was a healthier community when they were mixed, and I wanted to figure out why.
If you removed a nearby tree, you would find that another tree wouldn’t do so well. So she performed an experiment to find out why. She covered some small trees in plastic and injected radioactive gas into some of them and not others. When she came back later with a Geiger counter and ran it up the side of the trees, she found that the untreated trees had somehow absorbed the radioactive gas that the injected trees took in. So it was obvious that something was going on.
So to summarize, she found out something that scientists had suspected for a long time but didn’t know for sure: that trees cohabitate and exchange nutrients with each other through this network of mycelium, which also allows them to store nutrients (sugar) during hard times and retrieve them when it’s time to grow.
Additionally, the fungi and tree roots actually communicate with each other using chemical signals, according to the podcast. The fungi rely on the trees to produce sugar, which the trees can do using photosynthesis and the fungi cannot. In exchange, the fungi break down minerals in the soil by excreting acids and “mining” the rock particles, and send these minerals back up to the tree. Without the fungi, the trees would not be able to exist.
The trees can also use the network to tell nearby trees when a predator, like some kinds of beetle, is coming, so that they can excrete nasty-tasting chemicals that will repel it.
This network and cohabitation is really amazing! Just goes to show that God made trees just as complicated as he made humans. At least that’s my interpretation.
One last quote:
that all these trees, all these trees that were of totally different species, were sharing their food underground. Like if you put a food into one tree over here, it would end up in another tree, maybe 30 feet away over there. And then a third tree over here. And then a fourth tree over there. And a fifth tree over there. Sixth, seventh, eighth, ninth, 10th, 11th, all in all turns out one tree was connected to 47 other trees all around it. It was like, it was like a huge network.
Back in the summer of 2019, May specifically, I went on a boat trip with my mom and dad. I had a lot of time on my hands, and as boat trips can be kind of monotonous sometimes, I decided to tackle a project I had been hoping to do for a while: writing a chess program.
My initial effort consisted of finding an article on how to write a chess program and following along, but I was unsatisfied with just copying someone else. Nevertheless, I got good experience, which I invested in the next iteration of the project: a chess program written from scratch, completely in C++.
Why C++? Isn’t writing in C++ a lot harder because of its strongly typed nature and unforgiving compile-time errors? True, but I already had some experience in C++ and felt I could give it a go. I spent 85 hours, from the 20th of May, 2019, to the 3rd of July, 2019, and was able to get it into a mostly finished state. However, at this point I ran into a bug I could not figure out, which had to do with how values were propagated up the tree (a core part of the Minimax algorithm), so I put the project aside until I had more experience and could finish it.
To talk about the details of how the program works: it is mostly based on a tutorial I found on move generation for a chess game written in C (C is the predecessor to C++ and is very similar), plus a from-scratch implementation of the Minimax algorithm. Most of my code consists of move generation and the Minimax function. I’m glad I found the move-generation tutorial, as it was a real life-saver (I had no idea how to begin), but the real fun was in implementing the Minimax algorithm.
So how does the Minimax algorithm work? Well, most chess programs use some version of the Minimax algorithm, with move generation, combined with special tricks to make the program better than a basic one. The Minimax algorithm basically generates a tree of all the possible moves either player can make from a given starting point, up to a certain depth. All chess programs only search to a certain number of levels, and mine is no exception (though you can change it when you start the program). This is because you can’t generate too large a tree of possible moves or you’ll run out of time and memory.
By the way, an algorithm is a computational method or formula, but more specifically it can be defined as the list of steps needed to arrive at a certain result given a certain set of inputs. Algorithms are also usually designed to solve a single class of problems. An algorithm is needed for just about any computational task, big or small.
The tree made by the Minimax algorithm takes time to construct, and the deeper you make it (and hence the harder the program is to beat), the longer the computation takes. My program is single-threaded and takes about 60 seconds to compute a 5-level tree (which is a decent number of levels), but I usually set it to 4 levels for a faster turnaround while testing (under 10 seconds). 5 levels means the program can see 5 moves ahead when deciding its next move. This isn’t much, but it’s enough to beat most amateur players.
A Little Technical
So what does this tree consist of? Well, say you have 20 possible moves for a given game layout (the exact number of moves possible from the starting layout in the game of chess is in fact 20). The program has to generate all the legal moves for that layout and then decide the best move to make out of those moves. This is where the Minimax algorithm comes in. In the Minimax algorithm, each move is given a certain score, which is based on the values propagated up from the lower levels of the tree (assuming the first move is at the top). This score actually comes from the lowest level of the tree, where each move (also called a node) is given a static evaluation based on a set of criteria.
The biggest factor in these criteria is the sum of the values of the pieces on the board, but there can be many other factors as well depending on the game (Deep Blue used special end-game tricks, among other things, and a massively parallel supercomputer for its computations). Basically, each piece on the board gets a value that is assigned to it and never changes throughout the game; only when a piece is removed from the board does the sum change. The simplest implementation of the Minimax algorithm is based on this static evaluation of the layout of pieces at the bottom of the tree (five moves ahead, in this case). This gives you the score for that possibility.
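A minimal material-only evaluation of that kind might look like this (a sketch, not my actual `staticEval()`; the queen, pawn, and king values are the ones given later in the post, while the knight/bishop/rook values are placeholder guesses of mine):

```cpp
#include <cassert>

// Piece codes: positive for white, negative for black, 0 for empty.
enum Piece { EMPTY = 0, PAWN = 1, KNIGHT = 2, BISHOP = 3,
             ROOK = 4, QUEEN = 5, KING = 6 };

// Fixed material values, indexed by piece code.
// These never change during the game.
static const int pieceValue[7] = { 0, 100, 320, 330, 500, 900, 20000 };

// Sum the material on a 64-square board. A positive total means
// white (the maximizer) is ahead; negative means black is ahead.
int staticEval(const int board[64]) {
    int score = 0;
    for (int sq = 0; sq < 64; ++sq) {
        int p = board[sq];
        if (p > 0)      score += pieceValue[p];
        else if (p < 0) score -= pieceValue[-p];
    }
    return score;
}
```

Only the sum changes when a piece is captured, which is exactly the behavior described above.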
So the Minimax algorithm tries to see as many moves ahead as it can in an effort to defeat human intuition and our superior pattern-matching skills (given the neural-network-like nature of our brains). It uses brute force, evaluating anywhere from thousands to millions of possibilities each turn. My version set at level five only uses hundreds of thousands of evaluations, but that’s a lot compared to what the human brain can do, which is only a few moves at a time and just a few moves ahead (for your average Joe).
The Minimax algorithm uses a recursive implementation of a single function (in my case named miniMax()) to successively go deeper and deeper down the tree until it has gone as many levels as it needs to. Each time the function is called, it generates a list of moves and then calls itself on each of those 20+ moves (or nodes).
When it reaches the bottom of the tree (or goes down as far as I want it to), it calls another function which does a static evaluation on each of the nodes in the list of nodes (in my case this function is called staticEval()). This static evaluation returns a score for that move and passes it up to the Minimax function. The Minimax function takes the value of each of these static evaluations and either takes the maximum or minimum value, depending on whether the level it’s on is for black or white. Black is called the Minimizer, and White is called the Maximizer.
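Here is a stripped-down sketch of that recursion (not my actual code; instead of a chess position it walks a toy tree whose leaves already hold their static scores, so the move-generation details are out of the picture):

```cpp
#include <algorithm>
#include <cassert>
#include <climits>
#include <vector>

// A toy game-tree node: either a leaf carrying a static evaluation,
// or an internal node with child positions to search.
struct Node {
    int score;                  // staticEval() result, used only at leaves
    std::vector<Node> children; // empty at the bottom of the tree
};

// Plain Minimax: leaves return their static score; the maximizer
// (white) takes the largest child value, the minimizer (black)
// takes the smallest, alternating at each level of the tree.
int miniMax(const Node& n, bool maximizing) {
    if (n.children.empty())
        return n.score;
    int best = maximizing ? INT_MIN : INT_MAX;
    for (const Node& child : n.children) {
        int v = miniMax(child, !maximizing);
        best = maximizing ? std::max(best, v) : std::min(best, v);
    }
    return best;
}
```

In the real program the children aren’t stored up front; each call generates the legal moves for the current position and recurses on them, but the max/min logic is the same.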
The reason black and white take the minimum and maximum is that the pieces on the board (remember?) are assigned static values at the start of the game, negative for black (in my case) and positive for white. So a queen, for example, is given the value 900 or -900, depending on its color. Other piece values range from 100 or -100 for a pawn up to 20000 or -20000 for the king (just an arbitrarily high value).
It’s All About The Score
So the reason the Minimax function takes the maximum or minimum value is that the minimizer (black) is always trying to get the lowest score (more of its pieces on the board), and the maximizer (white) is trying to get the highest score (the most of its pieces on the board). Remember, the cumulative score for any board position reflects who is in control of the board. In my case it relates directly to how many pieces are on the board, but that’s just because my program is a work in progress and I haven’t added any other factors to my evaluation yet. So the maximum score possible is actually the score for the safest move for the maximizer, and the minimum score is the score for the safest move for the minimizer.
This score is passed up the tree until it reaches the top level, which holds the list of the 20+ possible moves from the original board position. The Minimax function at this point simply takes the maximum or minimum, depending on whose turn it is, and chooses the best move based on the score. In actuality, it may need to make a random selection if many of the moves are tied.
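That tie-break can be done by collecting every top-level move whose score equals the best score and picking one of them at random. A sketch (a hypothetical helper, not my actual code; it assumes the Minimax score of each root move has already been computed):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <vector>

// Given the Minimax score of each top-level move, return the index
// of one of the best moves, chosen at random when several are tied.
// 'maximizing' is true when it is white's turn.
int pickMove(const std::vector<int>& scores, bool maximizing) {
    int best = maximizing
        ? *std::max_element(scores.begin(), scores.end())
        : *std::min_element(scores.begin(), scores.end());
    std::vector<int> candidates;
    for (int i = 0; i < (int)scores.size(); ++i)
        if (scores[i] == best)
            candidates.push_back(i);
    // Random choice among the tied best moves keeps the program
    // from always playing the first move it happens to generate.
    return candidates[std::rand() % candidates.size()];
}
```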
Back to the Story
So about my program. It’s still a work in progress, but I recently got back to working on it this month and was able to make progress on fixing the bug, which I’ve isolated to the way the score is returned by each successive call of the Minimax function. You see, I don’t know how to return values for checkmate up the tree, or what to do in the event of checkmate. Do you return the maximum or minimum value for your given data type? Or do you just do a static evaluation on that node and not go any further down that branch of the tree?
So I’m working on figuring it out, but if you want to give me some advice or point me in the right direction, I’d appreciate it! I think I can figure it out soon, but I realize I need a better understanding of the Minimax algorithm, or at least of how results are passed up the tree in the event of checkmate.
That being said, here are some screenshots of the program in its current state to give you an idea of how well it does. I’ve got move generation down and Minimax is mostly done, but there is some odd behavior, as you will see.
So this first picture is a game in progress. You can see I used a text-based interface, as I wanted to get this up and running relatively quickly. I hadn’t yet implemented color coding for the pieces, so it’s a little confusing, but “black” at the top appears to be unwilling to move its high-value pieces out into the danger zone. I think this might have to do with the number of levels I had the game set at (four, as usual): if it can’t see very far ahead, it might just play “safe” and try to preserve the best score. But I think I need to add a little bit of bias mapping (based on how close a square is to the center of the board) to the static evaluation to make it work properly. Anyway, something is wrong with the way I’ve implemented it.
You can see in this next iteration of the game I had introduced color coding (a very slight difference, but there nonetheless), which makes it a lot easier to see whose pieces are whose. You can see my white pawn sitting right in front of the black queen, and you may be surprised to hear the pawn was able to take out the bishop to its left, as black just ignored him. Also, none of the pieces along the way from one side of the board to the other reacted either, so I’m thinking there’s some reason the computer is not taking pieces it can, when it should.
The king in this next game, in a later version of the program, was actually avoiding capture, as I had made a change to how the values propagate up the tree. In this screenshot he had just avoided the white pawn right below him. What I changed was to return the maximum value for an int if the minimizer was checkmated, or the minimum value if the maximizer was. That makes sense, as the max/min value represents the worst score for the one being checkmated, but something’s still wrong. The funny thing is, he would only avoid the danger if it was on the very next turn, which of course doesn’t work for a proper computer opponent.
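For what it’s worth, the conventional trick in chess engines (not something I had implemented at the time, so treat this as a suggestion rather than my method) is to fold the distance to mate into the returned score instead of using the raw int limits. With plain INT_MAX/INT_MIN, a mate in one and a mate in five score identically, which matches the behavior of a king that only dodges immediate danger:

```cpp
#include <cassert>

const int MATE_SCORE = 1000000; // larger than any possible material total

// Score a checkmate found 'ply' half-moves below the root.
// Subtracting the depth makes a nearby mate more extreme than a
// distant one, so the engine prefers to deliver mate quickly and
// to put off being mated as long as possible, instead of treating
// every mate as equally good or bad.
int mateScore(bool maximizerIsMated, int ply) {
    if (maximizerIsMated)
        return -(MATE_SCORE - ply); // bad for white; less bad if far away
    return MATE_SCORE - ply;        // good for white; better if near
}
```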
So in this last slide of the game you can see I was able to easily checkmate the king by moving my queen in. So although the king would avoid check at the last moment, he would eventually lose because of his shortsighted behavior.
So I gained a lot of experience from writing this game, and I wouldn’t trade it for very much, but I do still have a few things to learn: namely, how to implement Minimax properly and handle checkmate in a logical way. If any of you reading this have an idea of how to help me, leave a comment down below!
So I created a survey to find out, among other things, how well off photographers are, what their employment status is, whether they use vintage 35mm film cameras (I could have thought to include other film formats), and how much they love photography (on a scale of 1 to 10). This survey was a bit of an experiment. You can fill it out below! By the way, you don’t have to give out your income if you don’t want to. It’s not required.
If you can, send the link to this page to as many friends who are photographers as possible so that I can have as much data as possible for my survey. I will post the results on this blog when they are complete. Thanks a bunch!
So I said I would write this review on this book I read, titled “Range: Why Generalists Triumph in a Specialized World”, and I figured it was time to get to it. This book is pretty amazing in my opinion and I feel I learned a lot from it. I will share a few things I learned in the following review.
The book starts by talking about the “Tiger” syndrome (named after Tiger Woods), where kids are introduced early to the career they are going to pursue for the rest of their lives and get going on it right away. This does lead to better initial results, and it works out for some people, like Tiger Woods, who is very successful. But the book goes on to argue that this may not be the best way to learn.
It turns out people who don’t specialize early actually get a deeper understanding of the subjects they study and do better in the long term, and the data backs this up: people who have a “sampling period” and study many subjects actually do better in the long run.
The book then goes on to explain “wicked” vs “kind” problems. “Wicked” problems are those that require deep and analytical thinking rather than relying on the intuition we value so much. I think such problems may also represent the lion’s share of the really rewarding work we do. “Kind” problems are those which are easily solved early and are easy for us to grasp.
The book also talks about how learning outside your field actually helps solve problems inside it, and how specialized “tigers” consistently fail at, or take much longer to solve, difficult “wicked” problems than their more generalized peers. The idea is that information and experience from outside your profession can often be the key that unlocks a “wicked” problem.
One “wicked” problem given as an example was the cleanup of the Exxon Valdez oil spill in Alaska. The problem was that the spilled oil could not be pumped into the barges because it was so stiff and unworkable; it was described as “chocolate mousse”.
So they started a contest to find a way to fluidize the oil and make it pumpable, and eventually some guy with no experience in the area found that spinning rods stuck into the stiff unrefined oil would heat it up, fluidizing it and making it easy to pump. This is a good example of a “wicked” problem solved by someone with outside experience.
I will end with one more example from the book: using lateral thinking with “withered technology”. The example was Gunpei Yokoi, an engineering graduate who was hired by Nintendo before they made video games. He used his creative thinking to combine technology that wasn’t advanced or cutting edge into products that were hits on the market and innovated in surprising ways.
Yokoi started with no big ambitions but eventually ended up transforming Nintendo from a failing company into a global giant. He started by inventing things like an ingenious RC car that only turned right, and eventually went on to invent the Nintendo Game Boy, a revolutionary product for its time, not just for its technology but for how that technology was combined in new and surprising ways. His was a particular kind of genius, and he is an inspiration for how I might like to innovate in my own areas of expertise. He is what you would call a “generalist”, someone who combines multiple disciplines.
So overall I’m very grateful I read this book and am trying to apply what I have learned from it to my life. One way to do this is to not be afraid of studying many subjects at the same time, which is something I’ve worried about in the past but now feel much more comfortable with.
Tell me what you think of this review and any questions you may have about the book and I’d be happy to answer! And if you want to buy this book on Amazon, you can see it here.
I’ve recently been reading a book titled “Relativity: The Special and General Theory”, which is the book Albert Einstein wrote on his two theories of relativity. So far I’ve covered the Special Theory of Relativity and am just about to move on to the General Theory. Reading this book got me thinking about an old thought of mine: the problem of reconciling General Relativity and Quantum Mechanics.
You see, the motions of classical bodies (i.e. planets, humans, cars, balls, cats, you get the picture) can be described in a classical sense because relativistic effects at everyday speeds are so small that they practically can’t be measured (except when it comes to things like atoms and electrons moving at very high speeds). So there is no problem describing such systems in classical ways.
In a similar way, the theory of quantum mechanics describes things in a way that allows interesting things to happen at very small scales, but at larger scales the quantum effects cannot be measured at all. Correct me if I’m wrong, but I think I got this right. So both General Relativity and Quantum Mechanics share the same property that made them so hard to discover in the first place and so unintuitive: their effects hide at everyday scales.
I’m not sure what my point is, except that these theories of physics describe things that are not intuitive to the human brain or easy to discover, but ultimately rule our universe. So even though things sometimes look bleak for physics, or it looks like we haven’t made any breakthroughs (i.e. a working theory of Quantum Gravity or a Grand Unified Theory), eventually, if we stick to it and keep exploring new ideas, we will find the answers.
So I don’t know if this helps you, but I find physics fascinating and would love to learn more (though not necessarily go through college for it), and I find this very encouraging. If you found this interesting, please let me know in the comments and I’ll be happy to talk about it! (I think my comments section is working now.)
On a side note, I find Einstein makes a great author and would probably have made a great physics professor as well (he was appointed a professor of theoretical physics in Germany for a few years, a position made just for him). His manner is non-condescending and human, and relatively easy to understand. Although I didn’t totally understand the equations for the Lorentz transformation, perhaps I will at some point in the future.
I look forward to reading the rest of this book and would recommend it to anyone interested in physics and mathematics!
By the way, I know I said I would write a summary of the last book I read, “Range: Why Generalists Triumph in a Specialized World” which I just finished, but I will get to that soon so stay tuned.
I’ve been thinking about quantum computers a bit, and I think they could really change the world. The quantum computers of the future will make the supercomputers of today look like calculators, and a lot of people believe this. I think the article I linked to said that fully describing the state of a 300-qubit quantum computer would take about 2×10^90 values on a classical computer (that’s a 2 with 90 zeros after it).
The reason this is so is that quantum computers harness the quantum nature of the atom in a way classical computers do not. Whereas classical computers calculate with simple bits, quantum computers use an entirely new thing called a qubit. Qubits aren’t really binary bits: instead of representing a single on or off state, a qubit holds a superposition of the two states its particle can be in, described by its wave function (the math behind the superposition). It’s really powerful if you think about it (and understand the physics behind it). Quantum computers basically rely on the computing power of the atom to make their calculations.
So though quantum computers will never replace classical computers (they are not good at most of what classical computers do), and though the quantum computers of today are not as powerful as we can envision, they will get better and better until they surpass our expectations, just like classical computers did.
The reason quantum computers have been held back is that qubits are very fragile. When you interact with a qubit in any way, a measurement is made and the superposition of its states collapses to just 1 or 0, destroying the quantum information. Scientists try to get around this problem by cooling their quantum computers to a fraction of a degree above absolute zero, isolating the qubits from their environment so they don’t get jostled. Unfortunately, they still have a problem getting qubits to stay that way for more than a few tens of microseconds, but there are advances being made (for example, using graphene as a superconducting material).
For now, quantum computers remain giant super-cooled machines in researchers’ laboratories, but some day they may become as common as your cell phone. If you remember, the same thing happened with classical computers (think of ENIAC), and I think it will probably happen with quantum computers too, given enough time.
I’ve linked an article I read that goes into more detail on this. Read it if you’re interested in learning more, or just google “quantum computers”. There’s a whole world out there waiting for you to explore.
As for me, I’m thinking of studying quantum mechanics (on my own) and maybe getting a book on it. I have a recommendation from somebody I found on YouTube that will probably be good. I’ve been reading articles on Quantum Mechanics and am fascinated by the science and would like to learn more.
I used Mastodon for a while, but the server I was on did not have people interested in the same stuff I was, so I quit. But I’m getting back into it, and I think it could be a good platform to build on. It allows you to host your own server and not be dependent on any one company or organization. It also has some interesting features, such as 500-character “toots” (similar to tweets), local timelines (where everybody sees what everybody else on the same server posts), federated timelines (kinda like group following), and lots of little features that add up to a better whole.
If you’re interested in joining a Mastodon “instance” you can do so at https://joinmastodon.org/. The thing to remember is that joining is like getting an email address: you are stuck with that instance unless you get a new account on another server.
But the point is, I think distributed computing not run by any company or government is a great idea and may be what takes off in the next few years. And if you think about it, email is a distributed technology too, because you can send an email from Yahoo to Gmail and it will work fine; that’s not the case with most social networks these days. Also, Facebook and Twitter have a pretty strong hold on their audiences right now, but that could change as they continue to ignore their users’ pleas for privacy.
For my part, I’m going to give Mastodon another try and see where it leads. A nice thing about Mastodon is that you automatically have a built-in user base that sees all your toots (the local timeline). I’ll get back to you and tell you how it goes.
By the way, if you’re into programming you can request to join my server at https://x0r.be and I’ll be able to see your toots! Of course, not everybody is interested in programming, but there’s a fair number of servers out there, including some general-purpose ones that cover every subject. The thing about starting on a new social network is that you’ve got to be motivated, because not everybody wants to switch.
So The Super Rocket is a film I made with the Blanchet boys, and it’s my first foray into short-film-making. Overall it turned out pretty well, but it has a lot of technical errors that my inexperienced filmmaker brain did not catch (one example: the lighting in one scene does not look natural). But it’s about the best I could expect for a first film, and overall I’m pretty happy with it.
For those of you who don’t know, The Super Rocket is about two boys who set out to build a “super” rocket, meaning a high-power model rocket (watch the movie to find out what that means). But when one brother forgets about the other, trouble ensues.
The script took me ages to write mostly because I was so inexperienced but that’s part of the learning process. The Super Rocket was not actually the first script I ever wrote. The first script I ever wrote was called “Panspermia On The Starways” and was about a scientist who was investigating a bogus story of life that was found on another planet. That script was a disaster but taught me a lot about screenwriting. Namely, that you really have to have a good idea to get started.
This film was shot on the original Blackmagic Pocket Cinema Camera, which is probably the camera I will shoot my next film on (though I would like to rent a better camera, such as the BMPCC 6K). The reason I chose the BMPCC is its superior dynamic range and RAW capabilities. The music was composed by me, and I edited the film myself, including the not-so-great sound. If you’re interested in watching the film, you can see it below (don’t forget to leave a comment)!
So I’ve been interested in business and investing lately. I’m reading a book called “Rich Dad Poor Dad” by Robert Kiyosaki. It’s an inspirational read and has a lot of good lessons in it about managing money and general life lessons.
The last lesson that most impressed me was “Pay Yourself First”. Robert is basically saying to pay yourself (i.e. your savings account and other assets) before you pay others (i.e. your bills, the things you have to pay). So Robert is not saying not to pay your bills; he is saying to set aside money that is only for one thing (savings, investments, whatever) and then come up with the money for the things you have to pay. Having to come up with more money at the end of the month is supposed to motivate you to find more ways to earn it.
I’ve been interested in investing lately, and I have a little money in a savings account, but I’ve been digging into it to pay for things when I run out of money in my checking account. Basically, I wonder if never touching my savings account, only adding to it, would be a good idea? “Of course,” you say, but it’s not an easy thing to do. Somehow you’ve got to come up with the money you don’t have now (preferably having earned it ahead of time and having it sitting in your bank account). But I would like to hear your thoughts on this matter of savings accounts. I would like to invest part of my savings in the stock market some day, but haven’t had the money till now (I did spend about $5000 on equipment for my first film).
Overall, I’m really enjoying this book and I’m going to give it to my neighbor who wants to read it too (we did have some discussions about the book). I’m thinking I will want to read more of Robert’s books when I have the extra money (working on that now). But I do have another book which I am reading next called “Range: Why Generalists Triumph in a Specialized World” by David Epstein. Have you read this book? I’m thinking it will probably be good but you never know.
I would recommend “Rich Dad Poor Dad” to anyone interested in money, business, or life in general. And it’s really cheap in mass-market paperback too! You can buy the book on Amazon here.
Anyways, thank you for reading. I will try to respond to every comment (which I have not done in the past) so feel free to comment on this post.
So I recently became disenchanted with web development (serious web development) and wanted to find something more visual and creative to do; not that web development doesn’t take creativity. Years ago I had tried to get accepted as a stock photographer at istockphoto.com, where you have to submit a few images so they can evaluate whether you’re good enough and whether your images are the type they want. So I decided to apply again, because I thought my photography had probably improved, and lo and behold, I was accepted!
So this is a new way for me to make money; I’m simply adding it to the other ways I make money (which currently consist of web design and doing yard work for neighbors). I’m hoping to pursue a career in photography, so I’m not just interested in stock photography, but it can be a great way to make money (if you have a really good image that stands above the rest).
I decided I needed a website for my pursuit of this career so I created one yesterday and today that you can see at photography.timothygrindall.com (notice this blog is at wordpress.timothygrindall.com). It’s not a perfect or necessarily complete design but I can always change it. If you’re interested in seeing some of my best photography, check out the site!
I’d also love to get into portrait photography but I have plenty to keep me busy (though not to fill my days necessarily). I love taking pictures of people but I don’t get to do it so often because people are so camera shy (and so am I)!
I’m also hoping to get into filmmaking as part of my career. I own my own businesses so there’s no reason not to do more than one of them. I’m currently looking for clients who would like me to make advertising films/videos for free (just for the experience). So I have some ideas of what I want but other than that, I’m leaving it up to God. Worry about one’s future does no good.
I think getting a driver’s license will be a top priority so I can drive around Washington, Oregon, and California to different places where I can take pictures. That will also allow me to visit clients for web-design and filmmaking jobs (and would really open things up for me).