Green Arrays, Inc - Chuck Moore's company

Posted 30 Jan 2010 at 20:27 UTC by badvogato

"All that is required for evil to triumph is for good men to do nothing."

Chuck Moore now has a weblog and his own company, GreenArrays, Inc., in Incline Village near Lake Tahoe.


The GreenArrays family of processors was designed to optimize prototyping and minimize production costs. They boast technology that gives them unparalleled versatility: they are small, fast, low-power, and robust, and they have a simple development environment based on colorForth. RAM, ROM, timing, wake-up, pads, and serial, A/D, and D/A ports are all unique.

Recharging Your Cellphone, Mother Nature's Way, posted 1 Feb 2010 at 18:56 UTC by sye » (Journeyer)


Forth, posted 2 Feb 2010 at 14:47 UTC by fzort » (Journeyer)

What's so special about it? It's like programming in assembly for a stack-based processor. You should try something a bit more exotic, like Prolog or APL. Or Unlambda.

Strongtalk, posted 2 Feb 2010 at 19:44 UTC by badvogato » (Master)

"As for the future: Strongtalk contains innovations that are still far ahead of virtually any existing mainstream language or VM. Now that Strongtalk is open source, the future is up to you!"

Forth vs. asm, posted 3 Feb 2010 at 10:59 UTC by chalst » (Master)

At first glance, Forth doesn't look much better than assembler with a natty macro system, and indeed you can do a fair approximation of implementing Forth with just that.

The advantages of Forth over that are:

  1. It comes with a time-tried ideology of program construction, with a substantial literature that is not tied to any particular processor architecture, and that is indeed deployed on a wider family of architectures than most languages;
  2. In particular, every Forth programmer worth their salt has done something a bit like studying part of Structure and Interpretation of Computer Programs: in particular, writing Forth metacircular interpreters and byte-/machine-code compilers. Doing this isn't really considered part and parcel of mastering assembler.
  3. As a result, Forth programming makes rich use of metaprogramming: it's normal to write nontrivial language extensions as part of Forth code.
So Forth is both no more sophisticated, and far more sophisticated, than assembler.
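The metaprogramming point above can be made concrete with a toy sketch. This is a hypothetical, heavily simplified Forth-style interpreter in Python, written purely for illustration (real Forth compiles words into the dictionary rather than re-interpreting their source); it shows how a `: name ... ;` colon definition extends the language with new words at runtime.

```python
# Toy Forth-style interpreter sketch (illustrative only, not real Forth).
# ": name ... ;" adds a new word to the dictionary while the program runs,
# which is the self-extending style described above.

def run(source, stack=None, words=None):
    stack = [] if stack is None else stack
    # Built-in words operate directly on the data stack.
    words = words if words is not None else {
        "+":   lambda s: s.append(s.pop() + s.pop()),
        "*":   lambda s: s.append(s.pop() * s.pop()),
        "dup": lambda s: s.append(s[-1]),
    }
    tokens = iter(source.split())
    for tok in tokens:
        if tok == ":":                       # begin a colon definition
            name = next(tokens)
            body = []
            for t in tokens:
                if t == ";":                 # end of definition
                    break
                body.append(t)
            # The new word is just more source run later against the
            # same stack and dictionary: the language extends itself.
            words[name] = lambda s, b=" ".join(body): run(b, s, words)
        elif tok in words:
            words[tok](stack)
        else:
            stack.append(int(tok))           # anything else is a number
    return stack

print(run(": square dup * ;  7 square 1 +"))  # [50]
```

Defining `square` mid-program and then calling it is the small-scale version of what the literature means by Forth being a "language tool-kit".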

I think it makes an excellent choice of first programming language.

strongtalk, posted 6 Feb 2010 at 11:47 UTC by phr » (Journeyer)

Badvogato, were you thinking of StrongForth? It has been around for a while.

StrongTalk looks interesting but it's a Smalltalk dialect, nothing to do with Forth.

I wish good luck to Chuck Moore's new company, but I have to wonder what the market is these days for such devices, compared to commodity PIC or Atmel parts.

colorForth - a language tool-kit, posted 7 Feb 2010 at 21:34 UTC by badvogato » (Master)

phr, I didn't know there was a StrongForth, so thanks.

"Charley, an enthusiastic supporter, recently said that colorForth is to Forth as Forth is to C."

Question to CM: "Would you consider developing a new language from scratch?" CM: "No. I develop languages all the time. Each application requires one. But they’re all based on Forth. Forth has been called a language tool-kit. It is the starting point for an application language, much as the infant brain has the ability to learn any human language. Forth can do anything, but some things are easy. For example postfix notation. Many people, including me, have implemented infix notation. That’s not hard, but it doesn’t lead to any useful result. I believe in simple. I believe in efficient. And Forth embodies these."

"I despair. Technology, and our very civilization, will get more and more complex until it collapses. There is no opposing pressure to limit this growth. No environmental group saying: Count the parts in a hybrid car to judge its efficiency or reliability or maintainability. All I can do is provide existence proofs: Forth is a simple language; OKAD is a simple design tool; GreenArrays offers simple computer chips. No one is paying any attention."

A case for complexity, posted 8 Feb 2010 at 11:56 UTC by fzort » (Journeyer)

:troll on

It's shocking how people take some things for granted, even more so when these people are supposedly technically knowledgeable and should know better. Uncompressing a JPEG image of a LOLcat is very complex. And people want their LOLcats embedded in properly rendered HTML, with anti-aliased fonts, on semi-transparent windows with drop shadows, while listening to music streaming through the internets. Surprise: all that comes at a price. It's easy to complain about too much complexity, the sorry state of software engineering, code bloat, and all that, but maybe the bloat is there for a reason. The alternative is to tell people that no, they don't really need their LOLcats. You can always try that, I suppose.

I'm not impressed with his argument about postfix notation. One could argue that we're taught to think in infix in school, so it's easier for us to use infix when describing an algorithm on paper, and translating that to code that uses postfix feels unnatural. But yeah, I'll concede that postfix is easier for a compiler to parse. But parsing is peanuts. It has been a solved problem since the 1950s. Parsing is a minuscule part of what a compiler like gcc does; the lion's share of gcc's processing consists of manipulating s-expressions (take that, LISP heads), trying to produce the best output code possible, which most of the time bears no resemblance at all to the original source, whether it came from a language that represents expressions with postfix, infix, or prefix notation (gcc isn't limited to the C front-end; one can write a Forth front-end to gcc as well). Now you'll tell me that we don't really need gcc, that with postfix notation and a language that's "close to the metal" we can have simpler compilers, and, in fact, with a simple enough language, the original source code can run directly on hardware. Sure, wake me up when you make it render LOLcats.
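The "parsing is peanuts" claim is easy to demonstrate. Below is a minimal sketch of Dijkstra's shunting-yard algorithm, which converts infix to postfix and dates from the era the post alludes to; the operator table and whitespace tokenization are simplifying assumptions made for brevity.

```python
# Minimal shunting-yard sketch: infix tokens in, postfix tokens out.
# Only left-associative +, -, *, / and parentheses are handled here.

PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(tokens):
    out, ops = [], []
    for tok in tokens:
        if tok in PREC:
            # Pop operators of equal or higher precedence (left-assoc).
            while ops and ops[-1] != "(" and PREC[ops[-1]] >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                out.append(ops.pop())
            ops.pop()                        # discard the "("
        else:
            out.append(tok)                  # operand
    while ops:                               # flush remaining operators
        out.append(ops.pop())
    return out

print(" ".join(to_postfix("3 + 4 * ( 2 - 1 )".split())))  # 3 4 2 1 - * +
```

A couple of dozen lines handle the notational difference entirely, which is the point: whatever advantages Forth has, saving the compiler from infix parsing is not a significant one.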

By the way, back in the 1990s Sun applied its then-considerable engineering know-how on the design of a processor that could execute Java byte-code directly. Java byte-code, like Forth, is stack-based. It was supposed to take the embedded market by storm. It was an embarrassing flop.

If it's simplicity you're after, Turing-complete languages a lot simpler than Forth do exist.

:troll off

This was a troll, so don't get too angry.

fzort, posted 9 Feb 2010 at 15:44 UTC by badvogato » (Master)

you should be excommunicated for daring to challenge the infallibility of "the pope with his funny hat, like all popes do"

green arrays blog, posted 15 Feb 2010 at 00:37 UTC by phr » (Journeyer)

Green Arrays now has a blog:

It looks like they've released some new documentation for Array Forth recently.

I think chalst's comments are pretty accurate. I can understand how a Forth chip can be friendlier than (say) a PIC, to tiny applications implemented in what amounts to handwritten machine code. I don't know anything about the Java chip but I had the impression that it was supposed to support JVM-like security features, so it could run concurrent, mutually hostile applets in the same address space. That obviously would impose requirements that the Forth chips wouldn't face.

I don't quite understand the purpose of the Forth array chips though, especially the larger ones. If the GA4 (or even a single core version) were available today, I'd perhaps buy an evaluation kit to try it out in places where I might otherwise use a small Atmel or PIC. But I'm having a hard time seeing where to use the 144 core version, instead of a full blown DSP, GPGPU, or whatever.

Re lolcats: I think fast jpeg renderers do implement the discrete cosine transform in assembly code, to use the x86 SIMD vector instructions or in some cases an external GPU.

lolcats, posted 16 Feb 2010 at 00:08 UTC by fzort » (Journeyer)

phr: sorry for that, I was only pulling badvogato's leg. Seriously, all the best luck to Chuck! (Funny hat indeed.)

The complaint about too much complexity or code bloat does irk me a bit, though. I used to think like that, too. I also used to write hand-coded assembly a lot. However, as I evolved as a techie (I still suck, but I'd like to think that at least some progress was made) I came to realize that it's there simply because people demand a lot. Anyway, the complexity in the systems we design is overrated. It's nothing compared to that of biological systems, which most of the time seem to work just fine.

notes while reading simon_acts09.pdf , posted 9 Apr 2010 at 19:25 UTC by badvogato » (Master)

Horst Simon
10th DOE ACTS Collection Workshop
August 20, 2009

PGAS (Partitioned Global Address Space) Languages

Design for low power: more concurrency. Tensilica DP 0.09W; PPC450 3W; Power5 120W. This is how iPhones and MP3 players are designed to maximize battery life and minimize cost.

Low Power Design Principles
* IBM Power5 (server) - 120W @ 1900MHz - baseline
* Intel Core2 sc (laptop) - 15W @ 1000MHz - 4x more FLOPS/watt than baseline
* IBM PPC 450 (BG/P - low power) - 0.625W @ 800MHz - 90x more
* Tensilica XTensa (Moto Razor) - 0.09W @ 600MHz - 400x more
Even if each core operates at 1/3 to 1/10th the efficiency of the largest chip, you can pack 100s more cores onto a chip and consume 1/20 the power.
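The slide's ratios can be roughly sanity-checked with a back-of-the-envelope calculation, under the simplifying assumption (implicit in the slide) that throughput scales with clock rate. The result broadly matches: the Core2 and XTensa figures come out near 4x and 400x, while a pure clock/power ratio gives the PPC 450 closer to 81x than the quoted 90x, presumably because real FLOPS don't track clock exactly.

```python
# Rough check of the FLOPS/watt ratios above, treating MHz/W as a proxy
# for FLOPS/watt (a stated simplification, not an exact measurement).

chips = {                      # name: (clock in MHz, power in W)
    "Power5 (baseline)": (1900, 120),
    "Core2":             (1000, 15),
    "PPC 450":           (800, 0.625),
    "XTensa":            (600, 0.09),
}

base_mhz, base_w = chips["Power5 (baseline)"]
base_eff = base_mhz / base_w   # baseline efficiency, ~15.8 MHz/W

for name, (mhz, watts) in chips.items():
    ratio = (mhz / watts) / base_eff
    print(f"{name}: {ratio:.0f}x vs baseline")
```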

Customization Continuum
* Application-driven does NOT necessitate a special-purpose machine
* General purpose: Cray XT3
* Application driven: BlueGene, Green Flash
* Special purpose: D.E. Shaw Anton
* Single purpose: MD Grape

Computational Science and Engineering (CSE)
* CSE is not "just programming" (and not CS)
* Petaflop/s computing is necessary but not sufficient.

SciDAC (Scientific Discovery through Advanced Computing) - first federal program to implement CSE, created in 2001; LBNL+UCB is the largest recipient of SciDAC funding.

Cryo-EM: Significance
* Protein structure determination is one of the building blocks of molecular biology research
* The standard approach is to crystallize the protein
* However, 30% of all proteins do not crystallize or are difficult to crystallize

The Reconstruction Problem
Electron beam -> 3D macromolecule -> 2D projections (Cryo-EM)
Can we deduce the 3-D structure of the molecule from a set of 2-D projection images with unknown relative orientations?

Challenge
* Nonlinear inverse problem
* Extremely noisy data
* Large volume of data

Mathematical Formulation: "Unified 3-D Structural and Projection Orientation Refinement Using Quasi-Newton Algorithm." C. Yang, E. Ng and P.A. Penczek. Journal of Structural Biology 149 (2005), pp. 53-64.

Cryo-EM - Summary
* The computer IS the microscope!
* Image resolution is directly correlated with the available compute power.
* A naive and complete ab initio calculation of a protein structure might require 10^18 operations

Growth of Computing Power and "Mental Power" Evolution of Computer Power/Cost

Why This Simplistic View is Wrong
* Unsuitability of current architectures - Teraflop systems are focused on excelling in computing, only one of the six (or eight) dimensions of human intelligence
* Fundamental lack of mathematical models for cognitive processes - that's why we are not using the most powerful computers today for cognitive tasks
* Complexity limits - we don't even know yet how to model turbulence; how then do we model thought?

Six Dimensions of Intelligence

The Power Conundrum
* Unless there are new breakthroughs, an Exaflops computer in 2019 will consume 120MW
* The human brain operates at 20-40W
* We will need another factor of 1M to match the energy efficiency of the human brain
