Older blog entries for johnw (starting at number 74)

The following article is just a few notes on the nature of the Free monad.

```haskell
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE UndecidableInstances #-}

module FreeMaybe where
```

Values of type `Maybe a` come in just two shapes: `Nothing` and `Just a`. Now let’s look at the free monad over `Maybe`, namely `Free Maybe a`:

```haskell
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
    fmap f (Pure a)   = Pure (f a)
    fmap f (Free ffa) = Free $ fmap (fmap f) ffa

-- Required on GHC 7.10+, where Applicative is a superclass of Monad
instance Functor f => Applicative (Free f) where
    pure = Pure
    Pure f   <*> x = fmap f x
    Free ffa <*> x = Free $ fmap (<*> x) ffa

instance Functor f => Monad (Free f) where
    return = pure
    Pure a   >>= f = f a
    Free ffa >>= f = Free $ fmap (>>= f) ffa

instance (Show a, Show (f (Free f a))) => Show (Free f a) where
    showsPrec d (Pure a) = showParen (d > 10) $
        showString "Pure " . showsPrec 11 a
    showsPrec d (Free m) = showParen (d > 10) $
        showString "Free " . showsPrec 11 m
```

There are four “shapes” that values of `Free Maybe a` can take:

```
Pure a
Free Nothing
Free (Just (Free (Just (... (Free Nothing)))))
Free (Just (Free (Just (... (Free (Pure a))))))
```

In terms of whether a `Free Maybe a` represents an `a` or not, `Free Maybe a` is equivalent to `Maybe a`. Collapsing a `Free Maybe a` back down to a `Maybe a` is a forgetful operation, however: it discards the structure of the value – namely, which of the four shapes above it had, and how many occurrences of `Free (Just ...)` there were.

Why would you ever use `Free Maybe a`? Precisely when you care about the number of `Just`s. Now, say we had a functor that carried other information:
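To make that concrete, here is a small standalone sketch (the `depth` helper is hypothetical, not part of any library) that counts how many layers of `Free (Just ...)` a value carries – information a plain `Maybe a` could never retain:

```haskell
-- Standalone sketch: count the layers of Free (Just ...) in a Free Maybe value.
-- (depth is a hypothetical helper; Free is redefined locally for self-containment.)
data Free f a = Pure a | Free (f (Free f a))

depth :: Free Maybe a -> Int
depth (Pure _)        = 0
depth (Free Nothing)  = 0
depth (Free (Just m)) = 1 + depth m
```

For example, `depth (Free (Just (Free (Just (Free Nothing)))))` is `2`.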

```haskell
data Info a = Info { infoExtra :: String, infoData :: a }
    deriving (Show, Functor)
```

Then `Free Info a` is isomorphic to what we would have gotten had `infoExtra` been a `[String]` accumulated along the way:

```haskell
main :: IO ()
main = do
    print $ Free (Info "Hello" (Free (Info "World" (Pure "!"))))
```

Which results in:

```
>>> main
Free (Info {infoExtra = "Hello",
infoData = Free (Info {infoExtra = "World", infoData = Pure "!"})})
it :: ()
```

But now it’s also a `Monad`, even though we never defined a `Monad` instance for `Info`:

```haskell
main :: IO ()
main = do
    print $ do
        x <- Free (Info "Hello" (Pure "!"))
        y <- Free (Info "World" (Pure "!"))
        return $ x ++ y
```

This outputs:

```
>>> main
Free (Info {infoExtra = "Hello",
infoData = Free (Info {infoExtra = "World", infoData = Pure "!!"})})
it :: ()
```

This works because the Free monad simply accumulates the states of the various functor values, without “combining” them as a real monadic join would have done. `Free Info a` has left it up to us to do that joining later.
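We can perform that joining ourselves with a small fold. This standalone sketch (the `collect` helper is hypothetical; `Free` and `Info` are redefined locally) shows that a `Free Info a` really is just a list of accumulated `String`s paired with a final value:

```haskell
-- Standalone sketch: join a Free Info chain into ([String], a) by hand.
data Free f a = Pure a | Free (f (Free f a))
data Info a = Info { infoExtra :: String, infoData :: a }

collect :: Free Info a -> ([String], a)
collect (Pure a)          = ([], a)
collect (Free (Info s m)) = let (ss, a) = collect m
                            in (s : ss, a)
```

For the value printed above, `collect` yields `(["Hello", "World"], "!")`.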

Syndicated 2013-09-23 00:00:00 from Lost in Technopolis

This article assumes familiarity with monads and monad transformers. If you’ve never had an occasion to use `lift` yet, you may want to come back to it later.

The Problem

What is the problem that `monad-control` aims to solve? To answer that, let’s back up a bit. We know that a monad represents some kind of “computational context”. The question is, can we separate this context from the monad, and reconstitute it later? If we know the monadic types involved, then for some monads we can. Consider the `State` monad: it’s essentially a function from an existing state, to a pair of some new state and a value. It’s fairly easy then to extract its state and later use it to “resume” that monad:

```haskell
import Control.Applicative
import Control.Monad.State

main = do
    let f = do { modify (+1); show <$> get } :: StateT Int IO String

    (x, y) <- runStateT f 0
    print $ "x = " ++ show x   -- x = "1"

    (x', y') <- runStateT f y
    print $ "x = " ++ show x'  -- x = "2"
```

In this way, we interleave between `StateT Int IO` and `IO`, by completing the `StateT` invocation, obtaining its state as a value, and starting a new `StateT` block from the prior state. We’ve effectively resumed the earlier `StateT` block.

Nesting calls to the base monad

But what if we didn’t, or couldn’t, exit the `StateT` block to run our `IO` computation? In that case we’d need to use `liftIO` to enter `IO` and make a nested call to `runStateT` inside that `IO` block. Further, we’d want to restore any changes made to the inner `StateT` within the outer `StateT`, after returning from the `IO` action:

```haskell
import Control.Applicative
import Control.Monad.State

main = do
    let f = do { modify (+1); show <$> get } :: StateT Int IO String

    flip runStateT 0 $ do
        x <- f
        y <- get
        y' <- liftIO $ do
            print $ "x = " ++ show x   -- x = "1"

            (x', y') <- runStateT f y
            print $ "x = " ++ show x'  -- x = "2"
            return y'
        put y'
```

A generic solution

This works fine for `StateT`, but how can we write it so that it works for any monad transformer over `IO`? We’d need a function that might look like this:

```haskell
foo :: MonadIO m => m String -> m String
foo f = do
    x <- f
    y <- getTheState
    y' <- liftIO $ do
        print $ "x = " ++ show x

        (x', y') <- runTheMonad f y
        print $ "x = " ++ show x'
        return y'
    putTheState y'
```

But this is impossible, since we only know that `m` is a `Monad`. Even with a `MonadState` constraint, we would not know about a function like `runTheMonad`. This indicates we need a type class with at least three capabilities: getting the current monad transformer’s state, executing a new transformer within the base monad, and restoring the enclosing transformer’s state upon returning from the base monad. This is exactly what `MonadBaseControl` provides, from `monad-control`:

```haskell
class MonadBase b m => MonadBaseControl b m | m -> b where
    data StM m :: * -> *
    liftBaseWith :: (RunInBase m b -> b a) -> m a
    restoreM :: StM m a -> m a
```

Taking this definition apart piece by piece:

1. The `MonadBase` constraint exists so that `MonadBaseControl` can be used over multiple base monads: `IO`, `ST`, `STM`, etc.

2. `liftBaseWith` combines several things from our last example into one: it gets the current state from the monad transformer, wraps it in an `StM` type, lifts the given action into the base monad, and provides that action with a function which can be used to resume the enclosing monad within the base monad. When such a function exits, it returns a new `StM` value.

3. `restoreM` takes the encapsulated transformer state as an `StM` value, and applies it to the parent monad transformer so that any changes which may have occurred within the “inner” transformer are propagated out. (This also has the effect that later, repeated calls to `restoreM` can “reset” the transformer state back to what it was previously.)

With that said, here’s the same example from above, but now generic for any transformer supporting `MonadBaseControl IO`:

```haskell
{-# LANGUAGE FlexibleContexts #-}

import Control.Applicative
import Control.Monad.State
import Control.Monad.Trans.Control

foo :: MonadBaseControl IO m => m String -> m String
foo f = do
    x <- f
    y' <- liftBaseWith $ \runInIO -> do
        print $ "x = " ++ show x   -- x = "1"

        x' <- runInIO f
        -- print $ "x = " ++ show x'

        return x'
    restoreM y'

main = do
    let f = do { modify (+1); show <$> get } :: StateT Int IO String

    (x', y') <- flip runStateT 0 $ foo f
    print $ "x = " ++ show x'   -- x = "2"
```

One notable difference in this example is that the second `print` statement in `foo` becomes impossible, since the “monadic value” returned from the inner call to `f` must be restored and executed within the outer monad. That is, `runInIO f` is executed in `IO`, but its result is an `StM m String` rather than an `IO String`, since the computation carries monadic context from the inner transformer. Converting this to a plain `IO` computation would require calling a function like `runStateT`, which we cannot do without knowing which transformer is being used.

As a convenience, since calling `restoreM` after exiting `liftBaseWith` is so common, you can use `control` instead of `restoreM =<< liftBaseWith`:

```haskell
y' <- restoreM =<< liftBaseWith (\runInIO -> runInIO f)

-- becomes...
y' <- control $ \runInIO -> runInIO f
```

Another common pattern is when you don’t need to restore the inner transformer’s state to the outer transformer, you just want to pass it down as an argument to some function in the base monad:

```haskell
import Control.Concurrent (forkIO)
import Control.Monad (void)
import Control.Monad.Trans.Control

foo :: MonadBaseControl IO m => m String -> m String
foo f = do
    x <- f
    _ <- liftBaseDiscard forkIO $ void f
    return x
```

In this example, the first call to `f` affects the state of `m`, while the inner call to `f`, though it inherits the state of `m` in the new thread, does not restore its effects to the parent monad transformer when it returns.

Now that we have this machinery, we can use it to make any function in `IO` directly usable from any supporting transformer. Take `catch` for example:

``catch :: Exception e => IO a -> (e -> IO a) -> IO a``

What we’d like is a function that works for any `MonadBaseControl IO m`, rather than just `IO`. With the `control` function this is easy:

```haskell
import qualified Control.Exception as E
import Control.Monad.Trans.Control

catch :: (MonadBaseControl IO m, E.Exception e) => m a -> (e -> m a) -> m a
catch f h = control $ \runInIO -> E.catch (runInIO f) (runInIO . h)
```

You can find many functions generalized in this way in the `lifted-base` and `lifted-async` packages.
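Here is a usage sketch (the name `catchM` and the tiny `StateT` program are mine, not from the post) showing both how the generalized function is called and a subtlety of `monad-control`: state changes made inside the protected action are discarded when an exception aborts it, because the handler resumes from the `StM` captured when the catch was entered.

```haskell
{-# LANGUAGE FlexibleContexts    #-}
{-# LANGUAGE ScopedTypeVariables #-}

import qualified Control.Exception as E
import Control.Monad.State
import Control.Monad.Trans.Control

-- The generalized catch from above, under a local name to avoid
-- clashing with Prelude.catch on older GHCs.
catchM :: (MonadBaseControl IO m, E.Exception e) => m a -> (e -> m a) -> m a
catchM f h = control $ \runInIO -> E.catch (runInIO f) (runInIO . h)

main :: IO ()
main = do
    (_, s) <- flip runStateT (0 :: Int) $
        (modify (+1) >> liftIO (E.throwIO (userError "boom")))
            `catchM` \(_ :: E.SomeException) -> return ()
    print s   -- the modify inside the protected action was discarded
</imports'>
```

The final state printed is the state from before the protected action ran, not `1`, because the `modify` happened inside a `runInIO` whose `StM` result was thrown away by the exception.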

Syndicated 2013-09-21 00:00:00 from Lost in Technopolis

A whirlwind tour of conduits

While talking with people on IRC, I’ve encountered enough confusion around conduits to realize that people may not know just how simple they are. For example, if you know how to use generators in a language like Python, then you know pretty much everything you need to know about conduits.

The basics

Let’s take a look at them step-by-step, and I hope you’ll see just how easy they are to use. We’re also going to look at them without type signatures first, so that you get an idea of the usage patterns, and then we’ll investigate the types and see what they mean.

Everything in conduit begins with the `Source`, which `yield`s data as it is demanded. The dumbest possible form of source is an empty source:

``empty = return ()``

The next dumbest is a source that yields only a single value:

``single = yield 1``

In order to use any `Source`, I must ultimately connect it to a `Sink`. `Sink`s are nothing more than code which `await`s values from a `Source`. Let’s look at an example in Python, where these concepts are features of the language itself:

```python
def my_generator():
    for i in range(1, 10):
        yield i

for j in my_generator():
    print j
```

Here we have a generator (aka Source): a function which simply yields values. This generator is passed to the `for` statement, which consumes the values from it and binds them one by one to the variable `j`, printing each value after it is consumed.

The equivalent code using conduit employs a different syntax, but the general “shape” of the code is the same:

```haskell
import Control.Monad
import Control.Monad.IO.Class
import Control.Monad.Loops (whileJust_)  -- from the monad-loops package
import Data.Conduit

myGenerator = forM_ [1..9] yield

main = myGenerator $$
    whileJust_ await $ \j ->
        liftIO $ print j
```

I can make the code a little bit closer to Python’s example (making the call to `await` implicit) if I use `Data.Conduit.List`:

```haskell
import Control.Monad
import Control.Monad.IO.Class
import Data.Conduit
import qualified Data.Conduit.List as CL

myGenerator = forM_ [1..9] yield

main = myGenerator $$
    CL.mapM_ $ \j ->
        liftIO $ print j
```

Just regular code

Neither `Source`s nor `Sink`s have to be special functions, however. They are just regular code written in the `ConduitM` monad transformer:

```haskell
import Control.Monad.IO.Class
import Data.Conduit

main =
    (do yield 10
        yield 20
        yield 30)
    $$
    (do liftIO . print =<< await
        liftIO . print =<< await
        liftIO . print =<< await
        liftIO . print =<< await)
```

Each time `await` is called, it returns a value that was `yield`ed by the source wrapped in `Just`, or it returns `Nothing` to indicate the source has no more values to offer.
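For instance, here is a small sink built from nothing but `await` and recursion (a sketch using the conduit-1.x operators from this post), which consumes values until its source is exhausted and sums them:

```haskell
import Control.Monad (forM_)
import Data.Conduit

-- A sink that consumes Ints until await returns Nothing, summing them.
sumSink :: Monad m => Sink Int m Int
sumSink = go 0
  where
    go acc = do
        mx <- await
        case mx of
            Nothing -> return acc      -- source exhausted
            Just x  -> go (acc + x)    -- consume and continue

main :: IO ()
main = print =<< (forM_ [1..4] yield $$ sumSink)  -- prints 10
```

Note that the result of the whole pipeline is the sink’s return value, here the accumulated sum.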

There, now you know the basics of the conduit library.

Conduits

Between sources and sinks, there is a third kind of conduit, which is actually called just `Conduit`. A `Conduit` sits between sources and sinks, and is able to call both `yield` and `await`, applying some kind of transformation or filter to the data coming from the source, before it reaches the sink. In order to use a `Conduit`, you must fuse it to either a source or a sink, creating a new source/sink which has the action of the `Conduit` bound to it. For example:

```haskell
import Control.Monad.IO.Class
import Control.Monad.Loops (whileJust_)  -- from the monad-loops package
import Data.Conduit

main =
    (do yield 10
        yield 20
        yield 30)
    $=
    (whileJust_ await $ \x ->
        yield (x * 2))
    $$
    (do liftIO . print =<< await
        liftIO . print =<< await
        liftIO . print =<< await
        liftIO . print =<< await)
```

This example fuses a conduit that doubles the incoming values from the source to its left. We could equivalently have fused it with the sink to the right. In most cases it doesn’t matter whether you fuse to sources or to sinks; it mainly comes into play when you are using such fusion to create building blocks that will be used later.
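As an illustration of building blocks, the doubling step can be packaged as a named, reusable `Conduit` and fused to the sink instead (a sketch using `Data.Conduit.List` and the conduit-1.x `=$` operator):

```haskell
import Data.Conduit
import qualified Data.Conduit.List as CL

-- The doubling transformation as a reusable building block.
doubler :: Monad m => Conduit Int m Int
doubler = CL.map (* 2)

main :: IO ()
main = CL.sourceList [10, 20, 30] $$ (doubler =$ CL.mapM_ print)
```

Here `=$` fuses the conduit with the sink, producing a new sink that doubles whatever it is fed.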

Use the types, Luke

Now that we have the functionality of conduits down, let’s take a look at their types so that any errors you may encounter are less confusing.

A source has the type `Source m Foo`, where `m` is the base monad and `Foo` is the type of what you want to pass to `yield`.

A sink has the corresponding type `Sink Foo m a`, indicating that `await` returns values of type `Maybe Foo`, while the monadic operation of the sink returns a value of type `a`.

A conduit between these two would have type `Conduit Foo m Foo`.

You’re probably going to see the type `ConduitM` in your type errors too, since the above three are all synonyms for it. It’s a more general type than these three specialized types. The correspondences are:

```haskell
type Source m o    = ConduitM () o m ()
type Sink i m r    = ConduitM i Void m r
type Conduit i m o = ConduitM i o m ()
```

The `Void` you see in there is just enforcing the fact that sinks cannot call `yield`.

What’s next?

Beyond this, most of the conduit library is a bunch of combinators to make conduits more convenient to use. In a lot of cases, you can reduce conduit code down to something which is just as brief and succinct as what you might write in languages with native support for such operations. It’s a testament to Haskell, rather, that this doesn’t need to be a syntactic feature to be both useful and concise.

And what about `pipes`, and the other competing libraries in this space? In many ways they are each equivalent to what I’ve described above. If you want to use `pipes`, just write `respond` and `request` instead of `yield` and `await`, and you’re pretty much good to go! The operators for binding and fusing are different too, but what they accomplish is likewise the same.

If you’re interested in learning more about conduit and how to use it, check out the author’s own tutorial.

Syndicated 2013-07-16 00:00:00 from Lost in Technopolis

Update of gitlib libraries on Hackage, plus git-monitor

I’ve decided after many months of active development to release version 1.0.1 of gitlib and its related libraries to Hackage. There is still more code review to be done, and much documentation to be written, but this gets the code out there, and it has been working very nicely at FP Complete for about six months now.

The more exciting tool for users may be the `git-monitor` utility, which passively and efficiently makes one-minute snapshots of a single Git working tree while you work. I use it continually for the repositories I work on during the day. Just run `git-monitor -v` in a terminal window, and start making changes. After about a minute you should see commit notifications appearing in the terminal window.

Syndicated 2013-06-30 00:00:00 from Lost in Technopolis

Nightly builds of GHC HEAD for Ubuntu 12.04.2 LTS

Chatting with merijn on #haskell, I realized I have a file server running Ubuntu in a VM that’s idle most of the time, so I decided to set up a jenkins user there and make use of it as a build slave in the evenings. This means that at http://ghc.newartisans.com, you’ll now find nightly builds of GHC HEAD for Ubuntu as well (64-bit). It also includes fulltest and nofib results for each build.

Syndicated 2013-06-19 00:00:00 from Lost in Technopolis

Until the Comonad Reader comes back online, I have a temporary mirror setup at http://comonad.newartisans.com. It’s a bit old (Sep 2012), but has some classics like “Free Monads for Less”. It is missing the “Algebra of Applicatives”, though, since I hadn’t run the mirror in a while.

Syndicated 2013-06-19 00:00:00 from Lost in Technopolis

15 Jul 2010 (updated 15 Jul 2010 at 17:07 UTC) »

After spending a good while trying to understand monads in Haskell, and why the Haskell world is so fascinated by them, I finally understand why they aren’t as exciting to other languages, or why they are completely missing from languages like C++: because they’re mostly already there.

At its simplest, a monad is an abstraction of a value which knows how to apply functions to that value, returning a new monad. In other words, it’s a way to turn values into little packages that wrap additional functionality around that value. Sounds a lot like what an object does…

But this doesn’t tell you what’s exciting about them, from Haskell’s point of view. Another way of looking at them, without going into the wheres and whys, is this: In a lazily-evaluated, expression-based language, monads let you express sequenced, interdependent computation.

Consider the following two code examples. First, in C++:

```cpp
#include <iostream>

int main() {
    std::cout << "Hello, world!"
              << "  This is a sample"
              << " of using a monad in C++!"
              << std::endl;
    return 0;
}
```

And the same code in Haskell:

```haskell
module Main where

main :: IO ()
main = do putStr "Hello, world!"
          putStr "  This is a sample"
          putStr " of using a monad in C++!"
          putStr "\n"
```

What the IO monad in the second example is doing is making the sequenced evaluation of the print statements possible using a nice, normal looking syntax. The C++ code doesn’t need monads to do this, because it already embodies the concept of abstracted values (here, the iostream passed between insertion operators) and sequenced computation (because it’s not lazy).

1. Monads are abstractions of values. So are most C++ objects.

2. Monads permit functions to be applied to the “contained” value, returning a new version of the monad. C++ objects provide methods, where the mutated object is the new version.

3. Monads provide a way to encapsulate values in new monads. C++ objects have constructors.

As another example, consider the case where you have to call five functions on an integer, each using the return value of the last:

```
j(i(h(g(f(10)))))
```

This is an identical operation in both Haskell and C++. But what if the return value of each function wasn’t an integer, but an “object” that could either be an integer, or an uninitialized value? In most languages, there’s either a type, or syntax, for this concept:

```
C++     boost::optional<int>
C#      int?
Java    Integer
```

If each function returns one of these, but takes a real integer, it means we have to check the “null” status of each return value before calling the next function. In C++ this leads to a fairly common idiom:

```cpp
if (boost::optional<int> x1 = f(10))
    if (boost::optional<int> x2 = g(*x1))
        if (boost::optional<int> x3 = h(*x2))
            if (boost::optional<int> x4 = i(*x3))
                j(*x4);
```

Note that not only are these calls sequential, but due to the meaning of optionality, they are also inherently short-circuiting. If `f` returns `none`, none of the other functions get called.

Haskell can do this type of thing natively as well, and it looks similar:

```haskell
case f 10 of
  Nothing -> Nothing
  Just x1 ->
    case g x1 of
      Nothing -> Nothing
      Just x2 ->
        case h x2 of
          Nothing -> Nothing
          Just x3 ->
            case i x3 of
              Nothing -> Nothing
              Just x4 -> j x4
```

But it’s ugly as sin. In C++, we can be evil and flatten things out using basic features of the language, assuming we pre-declare the variables:

```cpp
(   (x1 = f(10))
 && (x2 = g(*x1))
 && (x3 = h(*x2))
 && (x4 = i(*x3))
 && (x5 = j(*x4)), x5)
```

Or you can eliminate the use of temporaries altogether by creating a wrapper class:

```cpp
#include <boost/function.hpp>
#include <boost/optional.hpp>

template <typename T> struct Maybe {
    boost::optional<T> value;

    Maybe() {}
    Maybe(const T& t) : value(t) {}
    Maybe(const Maybe& m) : value(m.value) {}

    Maybe operator>>(boost::function<Maybe<T>(const T&)> f) const {
        return value ? f(*value) : *this;
    }
};
```

If we change our functions to return `Maybe<int>` instead of just `boost::optional<int>`, it allows us to write this:

```
f(10) >> g >> h >> i >> j
```

Which in Haskell is written almost the same way:

```
f 10 >>= g >>= h >>= i >>= j
```
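The same pipeline can also be written in do-notation, which the Maybe monad desugars to exactly that chain of `>>=` (a self-contained sketch; the functions `f` through `j` here are stand-ins defined only for illustration):

```haskell
-- Hypothetical stand-in functions, each of which may fail with Nothing.
f, g, h, i, j :: Int -> Maybe Int
f x = Just (x + 1)
g x = Just (x * 2)
h x = if x > 100 then Nothing else Just x
i x = Just (x - 1)
j x = Just (x * x)

pipeline :: Int -> Maybe Int
pipeline n = do
    x1 <- f n      -- each step short-circuits to Nothing on failure
    x2 <- g x1
    x3 <- h x2
    x4 <- i x3
    j x4
```

If any step returns `Nothing`, the rest are skipped, just like the chain of `if` statements in the C++ version.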

But where Haskell needs Monads to make this type of thing reasonable and concise, C++ doesn’t. We get passing around of object state between function calls as part of the core language, and there are many different ways to express it. However, if you confined C++ to function definitions and return statements only – where all function arguments were pass-by-value – then things like Monads would become an essential technique for passing knowledge between calls.

So it’s not that you can’t use Monads in C++, it’s just that they require enough extra machinery, and aren’t unique enough compared to core features of the language, that there isn’t the same level of motivation for them as there is in Haskell, where they can really add to the expressiveness of code.

Syndicated 2010-07-15 12:20:06 (Updated 2010-07-15 16:50:38) from Lost in Technopolis

30 Oct 2009 (updated 10 Nov 2009 at 09:12 UTC) »

A C++ gotcha on Snow Leopard

I’ve seen this issue mentioned in some random and hard to reach places on the Net, so I thought I’d re-express it here for those who find Google sending them this way.

On Snow Leopard, Apple decided to build g++ and the standard C++ library with “fully dynamic strings” enabled. What this means for you relates to the empty string.

When fully dynamic strings are off (as was true in Leopard), there exists a single global variable representing the empty string. This variable lives in the data segment of `libstdc++`, and so it does not exist on the heap. Whenever a string is deconstructed, the standard library would check whether that string’s address matches the empty string’s: if so, it does nothing; if not, it calls `free`.

With fully dynamic strings on, there is no global empty string. All strings are on the heap, and once their reference count goes to zero, they get deallocated. Where this creates a problem is if you mix and match code. If a library that does have fully dynamic strings enabled (aka the standard library) receives an empty string from code which does not have it enabled (aka, the app you just built), it will try to free it and your application will crash.

Here’s a reproducible case for this issue using Boost:

```cpp
#include <string>
#include <sstream>
#include <boost/variant.hpp>

int main()
{
    std::ostringstream buf;
    boost::variant<bool, std::string> data;
    data = buf.str();
    data = false;
    return 0;
}
```

In this case – which really happened to me – I created an empty string by calling `ostringstream::str()`. Since I don’t have fully dynamic strings on, its address is in data space, not on the heap. I pass this string to `boost::variant`, which makes a copy of that address. Later, when the variant is reassigned `false`, it calls `~basic_string` to deconstruct the string. Since my standard library is compiled with fully dynamic strings, the destructor for `basic_string` doesn’t recognize that it’s the “special” empty string, so it tries to free it.

The solution to this problem is three-fold:

1. You must be using the `g++` that comes with Xcode, or if you build your own (say, via MacPorts), you must configure it using `--enable-fully-dynamic-string`. I’ve already submitted a patch to this effect to the MacPorts crew.

2. All libraries must be compiled with `-D_GLIBCXX_FULLY_DYNAMIC_STRING`.

3. Your own code must be compiled with `-D_GLIBCXX_FULLY_DYNAMIC_STRING`.

You’ll know if this issue is biting you by looking at a stack trace in gdb. You’ll see a crash somewhere inside basic_string’s `_M_destroy` (which calls `free`). Move up the trace a bit and check whether the string it’s trying to free is 0 bytes long.

To recap: what’s happened is that an empty string constructed by code without fully dynamic strings got deallocated by code that was. That is, most likely you, or a library you built, handed an empty `std::string` to the system library.

Syndicated 2009-10-30 09:35:32 (Updated 2009-11-10 08:12:39) from Lost in Technopolis

Branch policies with Git

I’ve been managing my Ledger project with Git for some time now, and I’ve finally settled into a comfortable groove concerning branches and where to commit stuff.

Essentially I use four branches, in increasing order of commit frequency. Each branch has its own policy and purpose, which are described below.

maint

Every release of Ledger is made from the maint branch, and every commit on that branch is potentially a release. This means that no commit is made until some serious vetting takes place. When the master branch is at a state where I want to finally release it, I merge with `--no-ff`, so the merge gets represented as a single commit on the maint branch. Then I tag the release and make a distribution tarball.

It’s possible after a release that patches need to get applied to maint, and a point release made. Once this is done, the applicable patches are either merged into master, or if the two diverge too greatly I will begin cherry-picking instead. Once cherry-picking starts, no more merges into master will occur until after the next release merge happens in maint.

The purpose of maint is to provide the most stable release possible to the public.

master

Master is where most people get the latest source code from, so it is kept reasonably stable. There is a commit hook which guarantees that all commits to this branch build and pass the test suite. Since most development work happens on “next”, each time next is stable I merge it into master, using `--no-ff` to keep the merge commits together. I also use `--no-commit`, so the merge must pass the commit hook in order to go in.

Note that no commits are ever made directly to master, unless I’ve seriously broken something that needs to be addressed sooner than the next merge from “next”. In that case, I’ll cherry-pick the fix into master afterward. Merges only happen into master from next, and only from master into maint.

The purpose of master is to provide reasonably stable development snapshots to the public.

next

The next branch is where I commit most often, and while I try to keep it functional, this is not always the case. I don’t run unit tests here for every commit, just before every push (mostly). Most of my friends follow this branch, because it updates very often.

The purpose of next is to provide potentially unstable, frequent development snapshots to the public.

test

The test branch comes in and out of existence, and should only ever be pulled using `pull --rebase`. It contains trial commits that I want someone to test out. It’s a delivery branch, and after it’s been used I either delete it or ignore it until the next time it’s necessary.

The purpose of test is to communicate patch candidates to a particular person at a particular time.

topic

Then there are the various local-only topic branches that live on my machine, in which I develop highly unstable code relating to one feature or another, awaiting the day when it becomes stable enough to be merge into “next”.

Syndicated 2009-10-29 03:05:57 (Updated 2009-10-29 06:23:55) from Lost in Technopolis

Response to PG's "How to Do Philosophy"

Back in late 2007, Paul Graham put up an essay titled “How to Do Philosophy”, in which Mr. Graham hoped to elucidate where Philosophy went wrong and why the field, as now practiced, must be renovated to remain useful. In fact, he goes so far as to suggest that much of philosophy has no benefit whatsoever:

The proof of how useless some of their answers turned out to be is how little effect they have. No one after reading Aristotle’s Metaphysics does anything differently as a result.

If I may, as a student of philosophy, I would like to offer my response to this argument, whose tenets have been repeated many times throughout Philosophy’s history.

The spirit of philosophy

As far back as Plato’s Republic (and most likely long before then) there have been debates on the merit of philosophy. In Plato’s book it is between Socrates and Glaucon, who fears that men may waste their time in fruitless contemplation:

Socrates: I am amused, I said, at your [Glaucon’s] fear of the world, which makes you guard against the appearance of insisting upon useless studies; and I quite admit the difficulty of believing that in every man there is an eye of the soul which, when by other pursuits lost and dimmed, is by these purified and re-illumined; and is more precious far than ten thousand bodily eyes, for by it alone is truth seen….

Earlier Socrates had said something similar, and in briefer terms:

Socrates: That the knowledge at which geometry aims is knowledge of the eternal, and not of aught perishing and transient.

Glaucon: That, he replied, may be readily allowed, and is true.

Socrates: Then, my noble friend, geometry will draw the soul towards truth, and create the spirit of philosophy, and raise up that which is now unhappily allowed to fall down.

This “spirit of philosophy” is held by Socrates over and over again to be precious beyond compare: a light to illumine every aspect of life. If a lantern is something you can design, hold and weigh, yet this light is its intangible counterpart, granting the lamp its purpose. It is the “why” to the lantern’s “what” and “how”. It can neither be designed, nor held, nor weighed, but must be enkindled. And only then does the lamp come aglow…

The harp on practicality

I understand the need for practical results in a material world, but results are meaningless deprived of context. If we boil things down to their material essence, then what we do we do for survival: develop resources to protect and prolong life. But is surviving enough? Don’t people also seek meaning from what they do? Certainly I don’t enjoy programming merely to make a paycheck; I have to feel something more to keep me motivated year after year.

The harp on practicality levied against philosophy overstresses the “what” against the “why”. Mr. Graham debates how to make philosophy useful again, but I think he has lost the point of it: useful in terms of what? Does usefulness have a “why”? Who is to define the best “use” of anything, so that usefulness may be measured? Thus, there is a conundrum at this center of his argument: How can any man judge philosophy who has not discovered what it aims to impart?

Anyone can understand the concept of practicality. Even children connect the ideas of work and output. It’s why we hate cleaning our room, because it takes so much work yet we gain so little from it. But what is practical is not the same as what is essential. Happiness, most of us know, is not found in more money, more power, or by more efficient processes. There is only one outcome in this life which is inevitable, and curiously neither industry nor indolence has any effect on its timing or nature. But whereas the practical man fears death as the end of opportunity, perhaps the philosopher sees it differently:

Socrates: The philosopher desires death – which the wicked world will insinuate that he also deserves: and perhaps he does, but not in any sense which they are capable of understanding. Enough of them: the real question is, What is the nature of that death which he desires? Death is the separation of soul and body – and the philosopher desires such a separation. He would like to be freed from the dominion of bodily pleasures and of the senses, which are always perturbing his mental vision. He wants to get rid of eyes and ears, and with the light of the mind only to behold the light of truth….

So the question is raised: Is there more than just this world? I don’t necessarily mean physical death, either. For there is a world of purely material pursuits and achievement – a world we share in common with animals – and there is a world of inspiration, abstraction, and fantasy, which only men participate in. The “practical man” knows well the value of practical things and he is an expert at perfecting the animal life; but it takes more than a well-fed stomach to bring true content. If not so, then cows should be our kings.

If a philosopher is anything, I say he is someone who forgoes all else to discover and adventure in that world, and to learn what effect immaterial consequences should have on our material life, if all is to be as it ought.

The bane of method

Not everyone who reads Plato, of course, comes away with mystical opinions. Just as there are those who eschew philosophy entirely and ignore its delights, so there are some who accept it but half-way. They see that philosophy prescribes a method and they fall in love with that method, dedicating the whole of their pursuit to refining it. Yes, Plato did stress the necessity of dialectic, but his stress had a purpose in mind. Not a material or practical goal – hardly even a “useful” one in immediate terms – but a personal and soulful one.

Philosophy is ever so much more than method. In fact, the love of method has resulted in a few branches of philosophy which are hardly philosophy at all, but the art of analysis. What Plato used his method for was to approach noesis: to know the “real real”, to have a direct apprehension of reality freed from mortal conceptions; to “remember” the soul’s birth and origin; to return our perception of the world to an original, direct perception of Truth itself. Through this experience of true perception our breasts and minds would dilate, and every pursuit will become infused with the vibrating principle of Life.

Missing the point

This is why, when I read essays like Mr. Graham’s, I find myself thinking that his own success and momentum have caused him to miss the point. Philosophy is not meant to be practical. It is not meant to have a use. It does not exist to make us more productive girls and boys. It is a diet of words to feed our soul by way of stimulating our mind. It is not a roast-beef sandwich, but more the substance of an ethereal longing.

Some will ask, what is this thing that is words and nothing more? To them I reply: Then what is poetry? There are human endeavors which are little more than words or pigments on paper, that come to life only through the eye of an appreciative heart and mind. Does a man read Shakespeare and ask what profit he has gained? If he does then he cannot see the point. What he gains is immaterial – literally and figuratively – but may in the long run be immensely valuable. It depends on what he saw, how well he saw, and the breadth of his vision.

It is no different with Philosophy. Consider it an artform, or a method of tuning the soul through delicate adjustments of the mind. When one tunes a violin there is no melody played; that comes after. The fruit of philosophy is the philosopher’s life itself. It is how it changes the man that matters, not the changes he can prove to you from day to day.

So if you are accustomed to reading balance sheets and preparing quarterly projections, perhaps you are ill-equipped to judge philosophy. But if you measure the smile of a happy engineer against the despair of an endless, daily grind, maybe then you will have found the weight of philosophy’s fruit.

Syndicated 2009-05-13 20:04:27 (Updated 2009-05-13 20:04:43) from Lost in Technopolis
