CellularAutomataAsNeuralNetwork
===============================
This was the project I wrote while learning C# for the first time, many years ago, and I also wrote an undergraduate thesis on the mathematics of converting outer totalistic cellular automata into a neural network. It is pretty common to convert one computational system into another, for various reasons (perhaps the tools you have at hand are better at processing the second computational system), so it is potentially useful to know how to convert one system to another. Whether this particular conversion has practical applications is unknown to me. One possible application would be if you had a system that you were able to model satisfactorily with a cellular automaton, but wanted to use a specialized neural network chip (a chip specialized for processing neural networks, which would perform many times faster than a conventional chip at the same task), or perhaps, in the future, a similar memristor chip. This is pure speculation on my part, as the firing function my neural network uses is not a simple threshold; it has one or more bounded firing zones. For Conway's Game of Life, the firing zone is between .28 and .4, so the weighted sum of a neuron's inputs must fall within that range, as opposed to a more traditional firing threshold of simply being greater than some amount. More complex cellular automata that you design with my program might have several zones.
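To make the "firing zone" idea concrete, here is a small sketch (in Python rather than the project's C#, and with illustrative weights I chose for the example — a neighbor weight of 0.1 and a self-connection weight of 0.08, which are not necessarily the weights the program actually derives):

```python
# Sketch: a "firing zone" activation instead of a single greater-than threshold.
# Weights below are hypothetical, chosen so Game of Life lands in the .28-.4 zone.

NEIGHBOR_WEIGHT = 0.1   # each of the 8 neighbor connections
SELF_WEIGHT = 0.08      # the cell's connection to its own previous state
FIRING_ZONES = [(0.28, 0.40)]  # fire when the weighted sum lands in any zone

def fires(self_state, neighbor_states):
    """Return True if the neuron fires in the next time step."""
    s = SELF_WEIGHT * self_state + NEIGHBOR_WEIGHT * sum(neighbor_states)
    s = round(s, 2)  # avoid floating-point jitter right at a zone boundary
    return any(lo <= s < hi for lo, hi in FIRING_ZONES)

# Game of Life behavior falls out of the zone:
print(fires(0, [1, 1, 1, 0, 0, 0, 0, 0]))  # True:  birth, sum = 0.30
print(fires(1, [1, 1, 0, 0, 0, 0, 0, 0]))  # True:  survival, sum = 0.28
print(fires(1, [1, 1, 1, 1, 0, 0, 0, 0]))  # False: overcrowding, sum = 0.48
```

Note that a simple greater-than threshold could not express this rule: a weighted sum of 0.48 (too many live neighbors) must *not* fire even though it exceeds the sums that do.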
Runs John Conway's Game of Life or any other outer totalistic cellular automaton you define, but internally cell states are represented as neuron firing states, and rules are converted to neuron weights and firing function thresholds.
Disclaimer
==========
This application was originally written against .NET 1.1, and therefore by today's standards might be considered poorly written, as there is a good bit of non-generic collections which aren't typesafe (I actually had an internal conflict about this, as I was remotely familiar with Java, which at the time DID have generics, and I was frustrated by the lack in C#). It was also my very first C# application, and I am posting it only because 1) it is pretty fun to play with and make your own cellular automata, 2) I spent a lot of time optimizing it so it would run with acceptable performance, and 3) many people who have seen it in action have asked for the source code.
Additionally, modelling neural networks varies by a great degree, in terms of both artificial vs. natural modelling and simplistic vs. complex/accurate modelling. Natural neurons usually do not have such distinct firing and not-firing states. My usage of the term neural network refers to the simplistic artificial end of the spectrum.
If you flame me for coding style or architecture, I will point you to this disclaimer, and you must admit you are an idiot for basically flaming someone for something that any sane person would expect to have mistakes (given that it was my first C# application and also written before many of today's C# language constructs existed).
Known Bug
=========
Framework: Code is compatible with Microsoft .NET 1.1, in the current solution I
The main challenge was getting all those squares to update smoothly and quickly each time step. This was initially painfully slow on the computer hardware of 10 years ago. The following simple architecture resolved this issue.
A background thread (Modelling.cs) models the neural network and adds updates to a queue, while the UI thread (UIForm.cs) consumes that queue and displays the updates via double-buffered GDI. I had a single-core processor at the time, so you might think multithreading would convey no benefit, but updating a UI from a background thread usually requires marshaling the calls, which degrades performance. Additionally, if this application were single threaded, the UI would often freeze while the neural network was updating (this can be mitigated in a *hackish* way by judicious use of DoEvents). However, using a queue to communicate updates from one thread to another avoids making marshaled UI thread calls and allows the UI to remain responsive while the other thread performs neural network modelling.
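The producer/consumer shape of that design can be sketched as follows (a Python stand-in for the C# Modelling.cs/UIForm.cs pair, not the project's actual code — the tuple layout of an update is my own invention for the example):

```python
# Sketch of the pattern described above: a background "modelling" thread
# pushes cell-state updates onto a thread-safe queue, and the "UI" thread
# drains the queue on its own schedule -- no cross-thread UI calls needed.
import queue
import threading

updates = queue.Queue()

def modelling_thread(steps):
    # Producer: compute cell-state changes and enqueue them.
    for i in range(steps):
        x, y, alive = i % 64, i // 64, i % 2  # hypothetical update payload
        updates.put((x, y, alive))
    updates.put(None)  # sentinel: modelling finished

worker = threading.Thread(target=modelling_thread, args=(10,))
worker.start()

# Consumer: in the real app this loop runs on the UI thread's timer/paint cycle.
drawn = []
while True:
    item = updates.get()
    if item is None:
        break
    drawn.append(item)  # in the real app: paint this cell
worker.join()
print(len(drawn))  # 10
```

The queue is the only shared object, and it is internally synchronized, so neither thread ever blocks the other for long.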
Both neural networks (see disclaimer on terminology) and simple cellular automata operate in two distinct states: alive/firing and dead/not-firing. Both also consist of many identical units, neurons and cells, which all operate simultaneously. Since a 64x64 grid consists of over 4,000 cells, we cannot have simultaneous execution of all 4,000 cells without a 4,000-core processor. In the absence of such a processor, we must simulate this simultaneous execution by maintaining two models of the system: the current time step and the next time step (or the previous and current, depending on how you look at it). The current time step has the states of all 4,000 cells, whether they are alive or dead. As we calculate the state of a cell in the next time step, we must use the states of its neighbors in the current time step, but update the state of the cell only in the next time step. To see why we need both a next and a current, imagine the following:
- We must go cell by cell to update the states of all cells from time step 1 to time step 2.
- Cell A in time step 1 is alive, but in time step 2 we calculate that it is dead based on inputs from neighboring cells, and we update its state.
- Cell B in time step 1 is dead, and based on the inputs of its neighbors, we calculate its state in time step 2. The states we use from its neighbors should all be their states in time step 1; however, we already updated Cell A, and are therefore using an invalid future state for the calculation.
Thus you see why it is so important to "buffer" these changes until you have processed all cells, then swap the next time step with the current.
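The two-buffer update above can be sketched in a few lines (a Python illustration of the technique, not the application's C# code; I use a wrap-around grid here just to keep the example short):

```python
# Sketch of double buffering: read every neighbor from the *current* grid,
# write only into the *next* grid, then swap the two.
def step(current):
    n = len(current)
    nxt = [[0] * n for _ in range(n)]  # the buffered "next time step"
    for y in range(n):
        for x in range(n):
            live = sum(current[(y + dy) % n][(x + dx) % n]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0))
            # Game of Life rule, evaluated entirely on old (time step 1) states
            nxt[y][x] = 1 if live == 3 or (current[y][x] and live == 2) else 0
    return nxt  # the caller "swaps" by replacing current with this

# A blinker flips between horizontal and vertical each step:
grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1   # horizontal bar
grid = step(grid)
print([grid[y][2] for y in (1, 2, 3)])     # [1, 1, 1] -- now vertical
```

Had the rule been applied in place, the cell at (2, 1) would already be dead by the time (2, 2) read its neighbors, and the blinker would collapse instead of oscillating.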
There are some thread synchronization techniques demonstrated in this project. Most of them I learned from reading articles at the time, so hopefully they are not poor examples. However, I will say that in my many years of programming, while I have used background threads, I have usually designed my applications to avoid any need for this kind of low-level thread synchronization, *because it is painful, tedious, and error prone to write*. I generally avoid writing low-level thread sync code by instead using the .NET framework's background thread constructs and events like ProgressChanged/RunWorkerCompleted, or the new async/await constructs added to C#. I'm not saying low-level thread sync is obsolete, but many common scenarios are more easily implemented using other techniques. Low-level thread synchronization is probably still applicable where your needs don't fit one of those scenarios and you feel you could *significantly improve performance*. This implies both that there is a large potential performance gain to be had *and* that your skills make you capable of realizing those gains without also introducing thread safety bugs. For example, this application has a bug that causes the process to often remain open when the window is closed, which I suspect is related to the background thread.
Binary serialization is demonstrated, showing how to save the state of the cells and rules to a file, and then later load that file to restore those cells/rules. It is generally better practice to use some other serialization format, such as XML or JSON (my personal favorite), which is human readable. The reason this is favored over binary is that it allows others to look at the file and easily see its structure, and thus create programs to read these files and modify them in an automated way, or even manually edit them in a text editor. Perhaps you want to make an HTML5 page or Java applet that can load rules created in my application and display the cellular automata in a browser. If the file is in a human-readable format, it is much easier for you to determine its structure and thus write code to load it. Additionally, most other languages have libraries that support JSON and XML. A binary format, while not impossible to figure out, would be much more time consuming and error prone to reverse engineer. However, binary formats generally load/save faster and take up less space, though some of today's JSON processors are very fast and the performance benefits of binary are becoming negligible.
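As an illustration of the human-readable alternative (a Python sketch with a made-up rule schema — these field names are not the application's actual file format):

```python
# Sketch: rules saved as JSON instead of an opaque binary blob. Anyone can
# read or hand-edit the text, and nearly every language can parse it back.
import json

rules = {
    "name": "Conway's Game of Life",
    "birth": [3],       # a dead cell with this many live neighbors is born
    "survival": [2, 3], # a live cell with this many live neighbors survives
}

text = json.dumps(rules, indent=2)  # human-readable on disk
loaded = json.loads(text)           # round-trips back to the same structure
print(loaded["birth"], loaded["survival"])  # [3] [2, 3]
```

A BinaryFormatter-style dump of the same data would round-trip just as well, but only this program (or a painstaking reverse-engineering effort) could read it.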