From khaliff@astercity.net Fri May 4 08:21:54 2001
Date: Fri, 4 May 2001 09:21:54 +0200 (CEST)
From: Wojciech Moczydlowski, Jr khaliff@astercity.net
Subject: "Lambda Dance", Haskell polemic,...
On Tue, 3 Apr 2001, Brian Boutel wrote:
> they do the job better that the proprietary competition. Why then has
> Haskell not done as well as sendmail or apache? Perhaps because the
> battleground is different. To get people to adopt Haskell (or any other
> brian
IMO, what's also important is the infamous memory consumption. Everybody
seems to ignore it, but as things stand I wouldn't use Haskell in a commercial product,
because of this little inconvenience. For me it doesn't matter much if a
language is slow - as long as it's not very slow, it's OK. More important to
me is predictability. I have to know how much memory my program will
eat. And in Haskell, with ghc the only sure answer is: "Very much". Things
are better with nhc, true, though I haven't yet tried its impressive
profiling tools. Hbc isn't alive, so there's no point in speaking about it.
The other thing, which somebody has mentioned, is the steep learning curve.
Most of us have understood and embraced monads - in fact, I consider do
notation one of the more important things in Haskell. Yet it took me some
time to understand it - and when I tried, I was a CS student. IMVHO monads
are a difficult concept to grasp. And simply writing to the screen or
reading text from the keyboard forces us to use IO. I don't know what to do to
make them easier to understand. Perhaps the new papers make it better.
The lack of standard arrays with O(1) update, and of hash tables, is
another thing. Every time I write a larger program, I use lists - with the
unpleasant feeling that my favourite language forces me to use either
nonstandard extensions or inefficient solutions. I think that IO
arrays/hash tables should be in the standard. Because they are in IO, they could
work efficiently. Yes, I know about MArray/IArray. Yet I couldn't find
information in the sources about the complexity of the array operations,
and they aren't Haskell 98 compliant.
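For concreteness, here is the kind of thing I mean, written against the
MArray interface (module and function names as found in GHC's array
library - i.e. exactly the kind of nonstandard extension I'd rather not
have to depend on):

import Data.Array.IO

-- Count occurrences of each byte value in a string with a mutable IOArray.
-- O(1) reads and writes are what this interface is meant to provide.
histogram :: String -> IO [Int]
histogram s = do
    arr <- newArray (0, 255) 0 :: IO (IOArray Int Int)
    mapM_ (bump arr) s
    getElems arr
  where
    bump arr c = do
      let i = fromEnum c `mod` 256
      n <- readArray arr i
      writeArray arr i (n + 1)

main :: IO ()
main = histogram "hello haskell" >>= print . sum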
I'm also curious about what you think the purpose of Haskell is. I
personally consider it *the* language for writing compilers.
But what can you do in it besides that, that wouldn't be as easy in other languages?
(I'm talking about large projects; we all know about polymorphism,
higher-order functions, etc.)
Wojciech Moczydlowski, Jr
From princepawn@earthlink.net Wed May 2 02:28:28 2001
Date: Tue, 1 May 2001 18:28:28 -0700
From: Terrence Brannon princepawn@earthlink.net
Subject: Dimensional analysis with fundeps (some physics comments)
Excuse me for playing referee here, as I am looking at this discussion
from a very abstract level, being a beginning functional-logic
programmer. But my question (primarily for Fergus) is: "has Ashley
shown in her example a means of automating a toUnits imperative in
Haskell that must be done explicitly in Mercury?" If so, then we have
the start of the Haskell-Mercury comparison document which shows an
advantage of Haskell over Mercury.
> I'm cross-posting this to the Libraries list...
>
> At 2001-04-10 18:02, Fergus Henderson wrote:
>
> >Still, the need to insert explicit `toUnits' is
> >annoying, and it would be nice to have a system where every number was
> >already a dimensionless unit.
>
> That's easy:
>
> --
> type Unit rep = Dimensioned Zero Zero Zero rep;
>
> instance (Real a) => Real (Unit a) where
> {
> -- put instances here
> };
> --
>
> Of course you'll have to appropriately declare superclasses of Real, such
> as Num, Ord, Show, Eq etc.
>
> --
> Ashley Yakeley, Seattle WA
>
>
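For concreteness, a minimal, purely illustrative sketch of the phantom-typed
wrapper the quoted fragment presupposes (Zero, Succ, Dimensioned and toUnits
are names invented for this sketch, not anything from a standard library):

data Zero   = Zero
data Succ n = Succ n   -- would encode nonzero exponents

-- 'len', 'mass' and 'time' are phantom parameters recording dimension exponents.
newtype Dimensioned len mass time rep = Dimensioned rep
  deriving (Eq, Ord, Show)

-- A dimensionless quantity, as in the quoted 'type Unit rep = Dimensioned Zero Zero Zero rep'.
type Unit rep = Dimensioned Zero Zero Zero rep

-- Wrapping a plain number; a Num/Real instance for 'Unit a' could then apply this implicitly.
toUnits :: rep -> Unit rep
toUnits = Dimensioned

main :: IO ()
main = print (toUnits 3.0 :: Unit Double)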
From andrew@andrewcooke.free-online.co.uk Wed May 2 22:44:12 2001
Date: Wed, 2 May 2001 22:44:12 +0100
From: andrew@andrewcooke.free-online.co.uk andrew@andrewcooke.free-online.co.uk
Subject: Interesting: "Lisp as a competitive advantage"
Two things in that interested me. First, the comment that "runtime
typing" is becoming more popular - did he mean strong rather than weak
typing, or dynamic rather than static?
Second, more interestingly, I was surprised at his emphasis on macros.
Having read his (excellent) On Lisp maybe I shouldn't have been (since
that is largely about macros), but anyway, I think it's interesting
because it's one of the big differences between Lisp and the
statically typed languages (STLs).
I have used neither Lisp nor Haskell (or ML) long enough to make a
sound judgement on this, so I'd like to hear other views. My first
thought is that higher order functions are easier to manage in STLs
and might provide some compensation. Also, I'm aware that (limited?)
code manipulation is possible in some STLs (there's an ML library
whose name I've forgotten). How do these compare?
I realise I could get more response by a usenet post to c.l.f and
c.l.l, but I'm hoping there might be more light here. Apologies if
it's too off-topic.
Thanks,
Andrew
On Tue, May 01, 2001 at 08:58:07PM -0400, Sengan wrote:
> http://www.paulgraham.com/paulgraham/avg.html
>
> I wonder how Haskell compares in this regard.
> Any comment from Haskell startups? (eg Galois Connections)
>
> Sengan
>
>
--
http://www.andrewcooke.free-online.co.uk/index.html
From bernardp@cli.di.unipi.it Thu May 3 08:40:02 2001
Date: Thu, 3 May 2001 09:40:02 +0200 (MET DST)
From: Pierpaolo BERNARDI bernardp@cli.di.unipi.it
Subject: Interesting: "Lisp as a competitive advantage"
On Wed, 2 May 2001 andrew@andrewcooke.free-online.co.uk wrote:
> Second, more interestingly, I was surprised at his emphasis on macros.
> Having read his (excellent) On Lisp maybe I shouldn't have been (since
> that is largely about macros), but anyway, I think it's interesting
> because it's one of the big differences between Lisp and the
> statically typed languages (STLs).
Here's a lisper's opinion:
I think that the importance of macros is overrated in the lisp community.
Macros are great if all you have is a traditional, first-order language;
they are not so essential if you have HOFs. In Haskell there are better
ways than macros to solve the problems macros solve in Lisp.
The real productivity booster of Lisp is its syntax. The fact that Lisp
syntax makes it easy to implement a macro system is a secondary benefit.
And syntax is, IMHO, the point that more modern functional languages get
wrong (yes, I know that this is controversial, and many non-lispers have
strong objections to this statement. No need to remind me).
Pierpaolo (ducking)
From nr@eecs.harvard.edu Thu May 3 15:16:37 2001
Date: Thu, 03 May 2001 10:16:37 -0400
From: Norman Ramsey nr@eecs.harvard.edu
Subject: Interesting: "Lisp as a competitive advantage"
> http://www.paulgraham.com/paulgraham/avg.html
>
> I wonder how Haskell compares in this regard.
I loved Graham's characterization of the hierarchy of power in
programming languages:
- Languages less powerful than the one you understand look impoverished
- Languages more powerful than the one you understand look weird
When I compare Lisp and Haskell, the big question in my mind is this:
is lazy evaluation sufficient to make up for the lack of macros?
I would love to hear from a real Lisp macro hacker who has also done
lazy functional programming.
Norman
From erik@meijcrosoft.com Thu May 3 17:08:59 2001
Date: Thu, 3 May 2001 09:08:59 -0700
From: Erik Meijer erik@meijcrosoft.com
Subject: Interesting: "Lisp as a competitive advantage"
----- Original Message -----
From: "Norman Ramsey" <nr@eecs.harvard.edu>
To: <haskell-cafe@haskell.org>
Sent: Thursday, May 03, 2001 7:16 AM
Subject: Re: Interesting: "Lisp as a competitive advantage"
> > http://www.paulgraham.com/paulgraham/avg.html
> >
> > I wonder how Haskell compares in this regard.
>
> I loved Graham's characterization of the hierarchy of power in
> programming languages:
>
> - Languages less powerful than the one you understand look impoverished
> - Languages more powerful than the one you understand look weird
Same for me; although you should not fall into the trap of reversing it, i.e.
if the language looks weird it is more powerful!
> When I compare Lisp and Haskell, the big question in my mind is this:
> is lazy evaluation sufficient to make up for the lack of macros?
Don't you get dynamic scoping as well with macros?
> I would love to hear from a real Lisp macro hacker who has also done
> lazy functional programming.
>
>
> Norman
>
From Alan@LCS.MIT.EDU Thu May 3 20:14:07 2001
Date: Thu, 3 May 2001 15:14:07 -0400 (EDT)
From: Alan Bawden Alan@LCS.MIT.EDU
Subject: Haskell-Cafe digest, Vol 1 #122 - 3 msgs
Subject: Re: Interesting: "Lisp as a competitive advantage"
Date: Thu, 03 May 2001 10:16:37 -0400
From: Norman Ramsey <nr@eecs.harvard.edu>
> http://www.paulgraham.com/paulgraham/avg.html
>
> I wonder how Haskell compares in this regard.
I loved Graham's characterization of the hierarchy of power in
programming languages:
- Languages less powerful than the one you understand look impoverished
- Languages more powerful than the one you understand look weird
When I compare Lisp and Haskell, the big question in my mind is this:
is lazy evaluation sufficient to make up for the lack of macros?
I would love to hear from a real Lisp macro hacker who has also done
lazy functional programming.
Norman
From Alan@LCS.MIT.EDU Thu May 3 21:25:45 2001
Date: Thu, 3 May 2001 16:25:45 -0400 (EDT)
From: Alan Bawden Alan@LCS.MIT.EDU
Subject: Interesting: "Lisp as a competitive advantage"
(Drat. Sorry for the duplicate message. I just learned a new Emacs
keystroke by accident... Ever notice how you never make a mistake like
that unless the audience is very large?)
Subject: Re: Interesting: "Lisp as a competitive advantage"
Date: Thu, 03 May 2001 10:16:37 -0400
From: Norman Ramsey <nr@eecs.harvard.edu>
> http://www.paulgraham.com/paulgraham/avg.html
...
When I compare Lisp and Haskell, the big question in my mind is this:
is lazy evaluation sufficient to make up for the lack of macros?
I would love to hear from a real Lisp macro hacker who has also done
lazy functional programming.
The answer is: "almost". Simply having higher order functions eliminates a
lot of the need to macros. Common Lisp programmers could probably use a
lot fewer macros than they do in practice. Lazy evaluation eliminates
the need for another pile of macros. But there are still things you
need macros for.
Here's a macro I use in my Scheme code all the time. I write:
(assert (< x 3))
Which macro expands into:
(if (not (< x 3))
(assertion-failed '(< x 3)))
Where `assertion-failed' is a procedure that generates an appropriate error
message. The problem being solved here is getting the asserted expression
into that error message. I don't see how higher order functions or lazy
evaluation could be used to write an `assert' that behaves like this.
From dpt@math.harvard.edu Fri May 4 00:01:08 2001
Date: Thu, 3 May 2001 19:01:08 -0400
From: Dylan Thurston dpt@math.harvard.edu
Subject: Interesting: "Lisp as a competitive advantage"
On Thu, May 03, 2001 at 04:25:45PM -0400, Alan Bawden wrote:
> Here's a macro I use in my Scheme code all the time. I write:
>
> (assert (< x 3))
>
> Which macro expands into:
>
> (if (not (< x 3))
> (assertion-failed '(< x 3)))
>
> Where `assertion-failed' is a procedure that generates an appropriate error
> message. The problem being solved here is getting the asserted expression
> into that error message. I don't see how higher order functions or lazy
> evaluation could be used to write an `assert' that behaves like this.
This is a good example, which cannot be implemented in
Haskell. "Exception.assert" is built into the GHC compiler, rather than
being defined within the language. On the other hand, the built-in
function gives you the source file and line number rather than the literal
expression; the macro can't do the former.
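For reference, a minimal use of the built-in looks like this (imported here
from Control.Exception, the module GHC currently exports it from):

import Control.Exception (assert)

-- GHC rewrites calls to 'assert' so that a failing assertion reports the
-- source file and line number (but not the asserted expression itself).
checkedDouble :: Int -> Int
checkedDouble x = assert (x < 3) (2 * x)

main :: IO ()
main = do
  print (checkedDouble 1)   -- fine
  print (checkedDouble 5)   -- fails (when assertions are enabled), reporting file and line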
--Dylan Thurston
dpt@math.harvard.edu
From dankna@brain.mics.net Fri May 4 00:09:01 2001
Date: Thu, 3 May 2001 18:09:01 -0500 (EST)
From: Dan Knapp dankna@brain.mics.net
Subject: Interesting: "Lisp as a competitive advantage"
> > (if (not (< x 3))
> > (assertion-failed '(< x 3)))
>
> This is a good example, which cannot be implemented in
> Haskell. "Exception.assert" is built in to the ghc compiler, rather than
> being defined within the language. On the other hand, the built in
> function gives you the source file and line number rather than the literal
> expression; the macro can't do the former.
Yeah, it's a good example, but are there any other uses for such quoting?
If not, then implementing it as a builtin is perfectly adequate. (Not
trying to pick on Lisp; Lisp is great. Just hoping for more examples.)
| Dan Knapp, Knight of the Random Seed
| http://brain.mics.net/~dankna/
| ONES WHO DOES NOT HAVE TRIFORCE CAN'T GO IN.
From tim@galconn.com Fri May 4 01:09:25 2001
Date: Thu, 03 May 2001 17:09:25 -0700
From: Tim Sauerwein tim@galconn.com
Subject: Interesting: "Lisp as a competitive advantage"
Norman Ramsey wrote:
> I would love to hear from a real Lisp macro hacker who has also done
> lazy functional programming.
I am such a person.
Lisp macros are a way to extend the Lisp compiler. Dylan's example
shows why this reflective power is sometimes useful. Here is another
example. I once wrote a macro to help express pattern-matching rules.
In these rules, variables that began with a question mark were treated
specially.
Having learned Haskell, I am not tempted to go back to Lisp. Yet I
occasionally wish for some sort of reflective syntactic extension.
- Tim Sauerwein
From elf@sandburst.com Fri May 4 02:08:09 2001
Date: Thu, 3 May 2001 21:08:09 -0400
From: Mieszko Lis elf@sandburst.com
Subject: Interesting: "Lisp as a competitive advantage"
Tim Sauerwein wrote:
> I once wrote a macro to help express pattern-matching rules.
> In these rules, variables that began with a question mark were treated
> specially.
David Gifford's Programming Languages class at MIT uses Scheme+, a variant
of MIT Scheme with datatypes and pattern matching. These extensions are
implemented as macros. (http://tesla.lcs.mit.edu/6821)
But of course Haskell has those already :)
-- Mieszko
From uk1o@rz.uni-karlsruhe.de Fri May 4 09:00:40 2001
Date: Fri, 4 May 2001 10:00:40 +0200
From: Hannah Schroeter uk1o@rz.uni-karlsruhe.de
Subject: Interesting: "Lisp as a competitive advantage"
Hello!
On Thu, May 03, 2001 at 06:09:01PM -0500, Dan Knapp wrote:
> [...]
> Yeah, it's a good example, but are there any other uses for such quoting?
> If not, then implementing it as a builtin is perfectly adequate. (Not
> trying to pick on Lisp; Lisp is great. Just hoping for more examples.)
IMHO you can do all the things you'd do with separate preprocessing
steps for other languages with Lisp macros, including scanner/parser
generation, for example. Or you could do the analogous thing
to camlp4 in Lisp with Lisp's own standard features (reader macros
+ normal macros).
You can e.g. also emulate Emacs Lisp in Common Lisp by slightly
hacking up the readtable and defining a few macros and functions
in a separate package. That's quite easy, in fact; the more complicated
part would be offering all those primitive functions of Emacs Lisp,
but if you had this, you could compile all those Emacs Lisp packages
into fast code. Imagine Gnus *not* crawling like a snail on a
Pentium 200 *g*
Kind regards,
Hannah.
From karczma@info.unicaen.fr Fri May 4 11:57:29 2001
Date: Fri, 04 May 2001 12:57:29 +0200
From: Jerzy Karczmarczuk karczma@info.unicaen.fr
Subject: Macros (Was: Interesting: "Lisp as a competitive advantage")
Discussion about macros, Lisp, laziness etc. Too many people to cite.
Alan Bawden uses macros to write assertions, and Dylan Thurston comments:
...
> > (assert (< x 3))
> >
> > Which macro expands into:
> >
> > (if (not (< x 3))
> > (assertion-failed '(< x 3)))
> >
> > Where `assertion-failed' is a procedure that generates an appropriate error
> > message. The problem being solved here is getting the asserted expression
> > into that error message. I don't see how higher order functions or lazy
> > evaluation could be used to write an `assert' that behaves like this.
>
> This is a good example, which cannot be implemented in
> Haskell. "Exception.assert" is built in to the ghc compiler, rather than
> being defined within the language. On the other hand, the built in
> function gives you the source file and line number rather than the literal
> expression; the macro can't do the former.
>
> --Dylan Thurston
In general this is not true; look at the macro preprocessing in C. If your
parser is kind enough to yield to the user some pragmatic information about
the text being read, say __LINE__ etc., you can code that kind of control with
macros as well.
Macros in Scheme are used to unfold n-ary control structures such as COND
into a hierarchy of IFs, etc. Nothing (in principle) to do with laziness
or HO functions. They are also used to define object-oriented layers in
Scheme or Lisp. I used them to emulate curried functions in Scheme.
I think that they are less than popular nowadays because they are dangerous,
badly structured, and difficult to write "hygienically". Somebody (Erik Meijer?)
asked: "Don't you get dynamic scoping as well with macros?" Well, what is
dynamic here? Surely this is far from "fluid" bindings; it is a good
way to produce name trapping and other diseases.
In Clean there are macros. They are rather infrequently used...
In C++ a whole zone of macro/preprocessor coding began to disappear with
the arrival of inlining, templates, etc.
I think that macros belong to *low-level* languages - languages where you
can feel, under the parsing surface, the underlying virtual machine. You can
do fabulous things with them. My favourite example is the language BALM: many
years before ML, Haskell etc., it was a functional, Lisp-like language
with a more classical, Algol-like syntax, with infix operators, etc.
The language worked on CDC mainframes, under SCOPE/NOS. Its processor was
written in assembler (Compass). But you should have a look at its
implementation! Yes, assembler, nothing more. But this assembler was so
macro-oriented, and so incredibly powerful, that the instructions looked like
Lisp. With recursion, parentheses, LET forms which allocated registers,
and other completely, deliciously crazy constructs. In fact, the authors
used macros to implement the entire Lisp machine used to process BALM
programs. //Side remark: don't ask me where to find BALM. I tried, I failed.
If *YOU* find it, let me know//
Another place where macros served as the main workhorses was the MAINBOL
implementation of Snobol4. But when people started to implement the Spitbol
etc. variants of Snobol4, they decided to use a more structured, higher-level
approach (there was even another, portable assembler with higher-level
instructions "embedded"; these avoided the use of macros).
Jerzy Karczmarczuk
Caen, France
From Keith.Wansbrough@cl.cam.ac.uk Fri May 4 16:52:14 2001
Date: Fri, 04 May 2001 16:52:14 +0100
From: Keith Wansbrough Keith.Wansbrough@cl.cam.ac.uk
Subject: Macros (Was: Interesting: "Lisp as a competitive advantage")
Jerzy Karczmarczuk <karczma@info.unicaen.fr> writes:
> Macros in Scheme are used to unfold n-ary control structures such as COND
> into a hierarchy of IFs, etc. Nothing (in principle) to do with laziness
> or HO functions.
Isn't this exactly the reason that macros are less necessary in lazy languages?
In Haskell you can write
myIf True x y = x
myIf False x y = y
and then a function like
recip x = myIf (abs x < eps) 0 (1 / x)
works as expected. In Scheme,
(define myIf
(lambda (b x y)
(if b x y)))
does *not* have the desired behaviour! One can only write myIf using
macros, or by explicitly delaying the arguments.
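Here is a self-contained version of the Haskell side (eps is an arbitrary
tolerance invented for the sketch); note that the untaken branch is never
evaluated even though myIf is an ordinary function:

myIf :: Bool -> a -> a -> a
myIf True  x _ = x
myIf False _ y = y

-- 'eps' is an arbitrary tolerance picked for this sketch.
safeRecip :: Double -> Double
safeRecip x = myIf (abs x < eps) 0 (1 / x)
  where eps = 1.0e-12

main :: IO ()
main = do
  print (map safeRecip [0, 0.5, 4])                   -- [0.0,2.0,0.25]
  print (myIf True "taken" (error "never evaluated")) -- laziness at work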
--KW 8-)
From qrczak@knm.org.pl Fri May 4 17:05:12 2001
Date: 4 May 2001 16:05:12 GMT
From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl
Subject: Macros (Was: Interesting: "Lisp as a competitive advantage")
Fri, 04 May 2001 12:57:29 +0200, Jerzy Karczmarczuk <karczma@info.unicaen.fr> writes:
> In Clean there are macros. They are rather infrequently used...
I think they roughly correspond to inline functions in Haskell.
They are separate in Clean because module interfaces are written
by hand, so the user can include something to be expanded inline in
other modules by making it a macro.
In Haskell, module interfaces are generated by the compiler, so they
can contain unfoldings of functions worth inlining, without explicitly
distinguishing them in the source.
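For comparison, the nearest Haskell-side knob is the (advisory) INLINE
pragma; a sketch on a made-up module:

module Square (square) where

-- Ask GHC to record the unfolding of 'square' so that calls in other modules
-- can be expanded at the call site - roughly what a Clean macro gives you.
{-# INLINE square #-}
square :: Int -> Int
square x = x * x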
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From dpt@math.harvard.edu Fri May 4 21:16:29 2001
Date: Fri, 4 May 2001 16:16:29 -0400
From: Dylan Thurston dpt@math.harvard.edu
Subject: Implict parameters and monomorphism
On Fri, May 04, 2001 at 07:56:24PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> I would like to make pattern and result type signatures one-way
> matching, like in OCaml: a type variable just gives a name to the given
> part of the type, without constraining it any way - especially without
> "negative constraining", i.e. without yielding an error if it will
> be known more than that it's a possibly constrained type variable...
I'm not sure I understand here. One thing that occurred to me reading
your e-mail was that maybe the implicit universal quantification over
type variables is a bad idea, and maybe type variables should, by
default, have pattern matching semantics. Whether or not this is a
good idea abstractly, the way I imagine it, it would make almost all
existing Haskell code invalid, so it can't be what you're proposing.
Are you proposing that variables still be implicitly quantified in
top-level bindings, but that elsewhere they have pattern-matching
semantics?
Best,
Dylan Thurston
From Alan@LCS.MIT.EDU Fri May 4 21:19:06 2001
Date: Fri, 4 May 2001 16:19:06 -0400 (EDT)
From: Alan Bawden Alan@LCS.MIT.EDU
Subject: Macros
Date: Thu, 3 May 2001 18:09:01 -0500 (EST)
From: Dan Knapp <dankna@brain.mics.net>
> > (if (not (< x 3))
> > (assertion-failed '(< x 3)))
>
> This is a good example, which cannot be implemented in
> Haskell. "Exception.assert" is built in to the ghc compiler, rather than
> being defined within the language. On the other hand, the built in
> function gives you the source file and line number rather than the literal
> expression; the macro can't do the former.
Yeah, it's a good example, but are there any other uses for such quoting?
There are a few. But this isn't the -only- reason to still use macros.
We could systematically go through all the macros I've written in the last
few years, and for each one we could figure out what language feature would
be needed in order to make that macro unnecessary. At the end of the
process you would have a larger programming language, but I still wouldn't be
convinced that we had covered all the cases.
A macro facility is like a pair of vise-grips (if you don't know what those
are, see http://www.technogulf.com/ht-vise.htm). You can do a lot of
things with a pair of vise-grips, although usually there's a better tool
for the job -- if you haven't got (say) a pipe-wrench, then a pair of
vise-grips can substitute. Now the more tools you have in your tool box,
the less often you will use your vise-grips. But no matter how bloated
your tool box becomes, you will still want to include a pair of vise-grips
for the unanticipated situation.
I have one problem with my own analogy: If you find yourself using your
vise-grips every day for some task, you will probably soon go and purchase a
more appropriate tool. But I think that in many circumstances macros do
such a good job that I don't see the need to clutter up the language with
the special-purpose features needed to replace them.
Date: Fri, 04 May 2001 12:57:29 +0200
From: Jerzy Karczmarczuk <karczma@info.unicaen.fr>
...
I think that they are less than popular nowadays because they are dangerous,
badly structured, difficult to write "hygienically"....
Indeed, you can screw up pretty badly with a pair of vise-grips! A friend
of mine used to say that programmers should have to pass some kind of
licensing test before they would be allowed to write Lisp/Scheme macros.
From qrczak@knm.org.pl Fri May 4 22:17:58 2001
Date: 4 May 2001 21:17:58 GMT
From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl
Subject: Implict parameters and monomorphism
Fri, 4 May 2001 16:16:29 -0400, Dylan Thurston <dpt@math.harvard.edu> writes:
> I'm not sure I understand here. One thing that occurred to me reading
> your e-mail was that maybe the implicit universal quantification over
> type variables is a bad idea, and maybe type variables should, by
> default, have pattern matching semantics.
Only for type signatures on patterns and results. It's a ghc/Hugs
extension. You can write:
f' arr :: ST s (a i e) = do
(marr :: STArray s i e) <- thaw arr
...
These type variables have the same scope as corresponding value
variables. The s,i,e in the type of marr refer to the corresponding
variables from the result of f'. You could bind i,e to new names in
marr, but not s. (Well, now I'm not sure why there is a difference...)
Type variables from the head of a class are also available in the
class scope in ghc.
You can use bound type variables in ordinary type signatures on expressions
and on let-bound variables in their scope. Unbound type variables in these
places are implicitly quantified by forall; I don't want to change this.
Some people think that type variables used in standard type signatures
(expressions and let-bound variables) should be available in the
appropriate scope. I don't have a strong opinion on that.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From jmaessen@mit.edu Sat May 5 00:31:21 2001
Date: Fri, 4 May 2001 19:31:21 -0400 (EDT)
From: Jan-Willem Maessen jmaessen@mit.edu
Subject: Macros
Alan Bawden <Alan@LCS.MIT.EDU> writes:
> A macro facility is like a pair of vise-grips (if you don't know what
> those are, see http://www.technogulf.com/ht-vise.htm).
I found myself laughing heartily at this apt analogy. I have heard
vice grips described as "the wrong tool for every job." (My own
experience with vice grips backs this up).
That being said, there are a number of things one might want out of a
macro facility, and I think they should be carefully distinguished:
1) The ability to name expressions without evaluating them, e.g. to
cook up a facsimile of laziness.
2) The ability to parrot source code (and maybe source position) back
at the user, e.g. Alan's assert macro, or its C equivalent.
3) The ability to create new binding constructs.
4) The ability to create new declaration constructs.
(1) is pretty well covered by lazy evaluation.
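For concreteness, a tiny sketch of (1): because arguments are evaluated only
on demand, an ordinary function can stand in for a macro that delays
evaluation of its operands.

orElse :: Maybe a -> a -> a
orElse (Just x) _        = x
orElse Nothing  fallback = fallback

main :: IO ()
main = do
  print (Just 42 `orElse` error "fallback never forced")
  print (Nothing `orElse` 0)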
For (2), I wonder if a clever set of compiler-supplied implicit
parameters might do the trick---after all, "the position of expression
e" and "the source code of expression e" are dynamic notions that
could be carefully defined.
(3) is trickier. Contrast monadic code before and after "do" notation
was introduced. Haskell made it possible---not even very hard---to do
monadic binding, but there was a good deal of ugly syntactic noise.
The introduction of "do" notation eliminated that noise. My instinct
is that this isn't so easy for things that can't be shoehorned into a
monad. For example, I use the Utrecht attribute grammar tool, and
have trouble imagining how grammars could be coded in pure Haskell
while preserving nice naming properties.
(4) is harder still. Polytypic classes are a huge step in the right
direction. What I most long for, though, is the ability to synthesize
new types and new classes---not just simple instance declarations.
As you can probably guess, I think (3) and (4) are the most profitable
avenues of exploration. And I'm pretty sure I _don't_ want syntax
macros for these. I'm still waiting to be convinced what I do want.
-Jan-Willem Maessen
Eager Haskell project
jmaessen@mit.edu
From ru@river.org Sat May 5 12:44:15 2001
Date: Sat, 5 May 2001 04:44:15 -0700 (PDT)
From: Richard ru@river.org
Subject: Macros
Norman Ramsey writes:
>When I compare Lisp and Haskell, the big question in my mind is this:
>is lazy evaluation sufficient to make up for the lack of macros?
it might make sense for Haskell to have a facility that makes it
possible for the programmer to define new bits of syntactic sugar
without changing the compiler.
eg, I recently wanted
case2 foo of ...
as sugar for
foo >>= \output->
case output of ...
if you want to call such an easy-sugar-making facility a macro
facility, fine by me. personally, I wouldn't bother with such a
facility.
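for concreteness, the desugared form as a tiny runnable program, with getLine
standing in for foo and 'output' being the temporary name the sugar would avoid:

classify :: IO ()
classify =
  getLine >>= \output ->
  case output of
    ""        -> putStrLn "empty line"
    ('#' : _) -> putStrLn "comment"
    s         -> putStrLn ("got: " ++ s)

main :: IO ()
main = classify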
aside from simple macros that are just sugar, macros go against the
Haskell philosophy, imo, because macros do not obey as many formal laws
as functions do and because a macro is essentially a new language
construct. rather than building what is essentially a language with
dozens and dozens of constructs, the Haskell way is to re-use the 3
constructs of lambda calculus over and over again (with enough sugar to
keep things human-readable).
From fjh@cs.mu.oz.au Sat May 5 15:53:24 2001
Date: Sun, 6 May 2001 00:53:24 +1000
From: Fergus Henderson fjh@cs.mu.oz.au
Subject: Macros (Was: Interesting: "Lisp as a competitive advantage")
On 04-May-2001, Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> wrote:
> Jerzy Karczmarczuk <karczma@info.unicaen.fr> writes:
>
> > In Clean there are macros. They are rather infrequently used...
>
> I think they roughly correspond to inline functions in Haskell.
>
> They are separate in Clean because module interfaces are written
> by hand, so the user can include something to be expanded inline in
> other modules by making it a macro.
>
> In Haskell module interfaces are generated by the compiler, so they
> can contain unfoldings of functions worth inlining without explicit
> distinguishing in the source.
I don't think that Clean's module syntax is the reason.
(Or if it is the reason, then it is not a _good_ reason.)
After all, compilers for other languages where module interfaces
are explicitly written by the programmer, e.g. Ada and Mercury, are
still capable of performing intermodule inlining and other intermodule
optimizations if requested.
My guess is that the reason for having macros as a separate construct
is that there is a difference in operational semantics, specifically
with respect to laziness, between macros and variable bindings.
However, this is just a guess; I don't know Clean very well.
--
Fergus Henderson <fjh@cs.mu.oz.au> | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
From Alan@LCS.MIT.EDU Sun May 6 11:15:26 2001
Date: Sun, 6 May 2001 06:15:26 -0400 (EDT)
From: Alan Bawden Alan@LCS.MIT.EDU
Subject: Macros
Date: Fri, 4 May 2001 19:31:21 -0400 (EDT)
From: Jan-Willem Maessen <earwig@abp.lcs.mit.edu>
Alan Bawden <Alan@LCS.MIT.EDU> writes:
> A macro facility is like a pair of vise-grips (if you don't know what
> those are, see http://www.technogulf.com/ht-vise.htm).
I found myself laughing heartily at this apt analogy. I have heard
vice grips described as "the wrong tool for every job." (My own
experience with vice grips backs this up).
I considered including that well-known quip in the paragraph where I tried
to make it clear that I was -not- saying the same thing about macros.
(While looking for a good online picture of vise-grips I came across a
number of vise-grip horror stories -- the best was the guy who replaced the
steering wheel in his car with a pair of vise-grips! But I digress...)
So just to reiterate: the property of vise-grips I find analogous to a
macro facility is that the more -other- tools you have available, the less
often you need -this- one; nevertheless you still want this tool in your
tool-box.
That being said, there are a number of things one might want out of a
macro facility, and I think they should be carefully distinguished:
1) The ability to name expressions without evaluating them, e.g. to
cook up a facsimile of laziness.
2) The ability to parrot source code (and maybe source position) back
at the user, e.g. Alan's assert macro, or its C equivalent.
3) The ability to create new binding constructs.
4) The ability to create new declaration constructs.
I like this list. I'd love to see nice elegant programming language
features for doing all of these things. Not only would I have to write
fewer macros, but I'd probably be able to do many amazing -new- things.
(Laziness, for example, doesn't just eliminate the need for a lot of
macros, it allows many new things that are beyond the reach of mere
macrology!)
I suspect, however, that even with everything on this list checked off, I'd
still want macros. Because I doubt that this list is exhaustive. The
example I pulled out of my hat led you to put item #2 on this list, but I
wonder if you would have thought of #2 if you didn't have my example before
you? If I had picked another example, I suspect this list would have
looked different.
- Alan
From qrczak@knm.org.pl Sun May 6 12:30:03 2001
Date: 6 May 2001 11:30:03 GMT
From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl
Subject: Macros
Sat, 5 May 2001 04:44:15 -0700 (PDT), Richard <ru@river.org> writes:
> eg, I recently wanted
>
> case2 foo of ...
>
> as sugar for
>
> foo >>= \output->
> case output of ...
Yes, I often miss OCaml's 'function' and SML's 'fn' syntax, which allow
dispatching without inventing a temporary name for either the argument or
the function.
Today I ran across exactly your case. In non-pure languages you would
just write 'case foo of'. I would be happy with just 'function':
get >>= function
... -> ...
... -> ...
I wonder if these parts of Haskell's syntax will stay forever, or whether
there is a chance of some more syntactic sugar.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From chak@cse.unsw.edu.au Mon May 7 06:01:47 2001
Date: Mon, 07 May 2001 15:01:47 +1000
From: Manuel M. T. Chakravarty chak@cse.unsw.edu.au
Subject: Functional programming in Python
Two quite interesting articles about FP in Python are over
at IBM developerWorks:
http://www-106.ibm.com/developerworks/library/l-prog.html
http://www-106.ibm.com/developerworks/library/l-prog2.html
Two IMHO interesting things to note are the following:
* In Part 1, at the start, there is a bullet list of what
the author regards as FP "features". I found the
following interesting about this list:
- There is no mention of the emphasis placed on strong
typing in many modern functional languages.
- The author makes it sound as if FP can't handle
imperative features, whereas I would say that this is a
problem of the past and wasn't an issue in many FP
languages (Lisp, ML, ...) in the first place.
The opinion of the author is not really surprising, but I
think, it indicates a problem in how FP presents itself
to the rest of the world.
* In Part 2, the author writes at the end:
I have found it much easier to get a grasp of functional
programming in the language Haskell than in Lisp/Scheme
(even though the latter is probably more widely used, if
only in Emacs). Other Python programmers might
similarly have an easier time without quite so many
parentheses and prefix (Polish) operators.
I think, this is interesting, because both Lisp and Python
are dynamically typed. So, I would have expected the
strong type system to be more of a hurdle than Lisp's
syntax (or lack thereof).
Cheers,
Manuel
From karczma@info.unicaen.fr Mon May 7 11:38:31 2001
Date: Mon, 07 May 2001 12:38:31 +0200
From: Jerzy Karczmarczuk karczma@info.unicaen.fr
Subject: Macros (Was: Interesting: "Lisp as a competitive advantage")
Keith Wansbrough quotes:
>
> Jerzy Karczmarczuk <karczma@info.unicaen.fr> writes:
>
> > Macros in Scheme are used to unfold n-ary control structures such as COND
> > into a hierarchy of IFs, etc. Nothing (in principle) to do with laziness
> > or HO functions.
>
> Isn't this exactly the reason that macros are less necessary in lazy languages?
> In Haskell you can write
> myIf True x y = x
> myIf False x y = y
>
> and then a function like
> recip x = myIf (abs x < eps) 0 (1 / x)
>
> works as expected. In Scheme,
>
> (define myIf
> (lambda (b x y)
> (if b x y)))
> does *not* have the desired behaviour! One can only write myIf using
> macros, or by explicitly delaying the arguments.
==========================
Well, my point was very different *here*. Lazy functions *may* play the role
of control structures (as your myIf in Haskell). In Scheme, with macros or
without macros you CANNOT implement "if". But you can write
(cond (cond1 seq1)
(cond2 seq2 etc)
...
(condN und so weiter) )
translated into
(if cond1 seq1
(if cond2 (begin seq2 etc)
(if ... )))
In the same way the LET* constructs are expanded. And a multifunction
DEFINE.
So, here it is not a question of laziness, but of syntactic extensions.
(Of course, one can "cheat" by implementing a user-defined IF as you proposed
above, delaying the last two arguments of this ternary operator, but the
decision about which one will be evaluated will pass through the "real" IF anyway.)
However, if you already have your primitive control structures (i.e. your
underlying machine), you can play with macros to implement lazy continuations,
backtracking, etc. Usually this is awkward, and less efficient than if done
at a more primitive level, where you can easily use unboxed data stored on
true stacks, execute real, fast branching, etc.
I am not sure about this vise-grip analogy. For me macros are façades, a way
to present the surface of the program. The real work, where you need wrenches,
pipes, dynamite and a Bible, is all hidden behind.
Jerzy Karczmarczuk
Caen, France
From sperber@informatik.uni-tuebingen.de Mon May 7 09:53:44 2001
Date: 07 May 2001 10:53:44 +0200
From: Michael Sperber [Mr. Preprocessor] sperber@informatik.uni-tuebingen.de
Subject: Macros (Was: Interesting: "Lisp as a competitive advantage")
>>>>> "Keith" == Keith Wansbrough <Keith.Wansbrough@cl.cam.ac.uk> writes:
Keith> Jerzy Karczmarczuk <karczma@info.unicaen.fr> writes:
>> Macros in Scheme are used to unfold n-ary control structures such as COND
>> into a hierarchy of IFs, etc. Nothing (in principle) to do with laziness
>> or HO functions.
Keith> Isn't this exactly the reason that macros are less necessary in
Keith> lazy languages?
Keith> In Haskell you can write
Keith> myIf True x y = x
Keith> myIf False x y = y
Sure, but you're relying on pattern matching to implement the
semantics of myIf, which already is a generalized conditional. In
Scheme, where pattern matching is not primitive, you can get it
through a macro. The same holds for things like DO notation or list
comprehensions, where apparently lazy evaluation by itself doesn't
help in implementing convenient syntax atop a more primitive underlying
notion.
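For instance, what Scheme builds with a COND macro, Haskell expresses directly
with guards (themselves built-in sugar over nested conditionals), with no
macro layer involved:

describe :: Int -> String
describe n
  | n < 0     = "negative"
  | n == 0    = "zero"
  | n < 10    = "small"
  | otherwise = "large"

main :: IO ()
main = mapM_ (putStrLn . describe) [-3, 0, 7, 42]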
--
Cheers =8-} Mike
Friede, Völkerverständigung und überhaupt blabla
From ronny@cs.kun.nl Mon May 7 10:14:22 2001
Date: Mon, 07 May 2001 11:14:22 +0200
From: Ronny Wichers Schreur ronny@cs.kun.nl
Subject: Macros
Marcin 'Qrczak' Kowalczyk writes:
>I think [Clean macros] roughly correspond to inline functions
>in Haskell.
That's right. I think the most important difference is that Clean
macros can also be used in patterns (if they don't have a lower
case name or contain local functions).
The INLINE pragma for GHC is advisory; macros in Clean will always
be substituted.
>They are separate in Clean because module interfaces are written
>by hand, so the user can include something to be expanded inline
>in other modules by making it a macro.
>In Haskell module interfaces are generated by the compiler, so
>they can contain unfoldings of functions worth inlining without
>explicit distinguishing in the source.
Fergus Henderson replies:
>I don't think that Clean's module syntax is the reason. (Or if it
>is the reason, then it is not a _good_ reason.) [...]
You're right, having hand-written interfaces doesn't preclude
compiler-written interfaces (or optimisation files). Let's call it
a pragmatic reason: Clean macros are there because we don't do any
cross-module optimisations and we do want some form of inlining.
Cheers,
Ronny Wichers Schreur
From simonmar@microsoft.com Mon May 7 14:14:51 2001
Date: Mon, 7 May 2001 14:14:51 +0100
From: Simon Marlow simonmar@microsoft.com
Subject: Macros
> Today I ran across exactly your case. In non-pure languages you would
> just write 'case foo of'. I would be happy with just 'function':
>
> get >>= function
> ... -> ...
> ... -> ...
Well, simply extending the Haskell syntax to allow
\ p11 .. p1n -> e1
..
pm1 .. pmn -> em
(with appropriate layout) should be ok, but I haven't tried it. Guarded
right-hand-sides could be allowed too.
Cheers,
Simon
From rossberg@ps.uni-sb.de Mon May 7 14:34:49 2001
Date: Mon, 07 May 2001 15:34:49 +0200
From: Andreas Rossberg rossberg@ps.uni-sb.de
Subject: Macros
Simon Marlow wrote:
>
> Well, simply extending the Haskell syntax to allow
>
> \ p11 .. p1n -> e1
> ..
> pm1 .. pmn -> em
>
> (with appropriate layout) should be ok, but I haven't tried it. Guarded
> right-hand-sides could be allowed too.
Introducing layout after \ will break a lot of programs. For example,
consider the way >>= is often formatted:
f >>= \x ->
g >>= \y ->
...
I guess that's why Marcin suggested using a new keyword.
- Andreas
--
Andreas Rossberg, rossberg@ps.uni-sb.de
"Computer games don't affect kids.
If Pac Man affected us as kids, we would all be running around in
darkened rooms, munching pills, and listening to repetitive music."
From simonmar@microsoft.com Mon May 7 14:42:20 2001
Date: Mon, 7 May 2001 14:42:20 +0100
From: Simon Marlow simonmar@microsoft.com
Subject: Macros
> Simon Marlow wrote:
> >
> > Well, simply extending the Haskell syntax to allow
> >
> > \ p11 .. p1n -> e1
> > ..
> > pm1 .. pmn -> em
> >
> > (with appropriate layout) should be ok, but I haven't tried it. Guarded
> > right-hand-sides could be allowed too.
>
> Introducing layout after \ will break a lot of programs. For example,
> consider the way >>= is often formatted:
>
> f >>= \x ->
> g >>= \y ->
> ...
>
> I guess that's why Marcin suggested using a new keyword.
Ah yes, I forgot that lambda expressions are often used like that
(actually, I think that this use of the syntax is horrible, but that's
just MHO).
Cheers,
Simon
From mg169780@students.mimuw.edu.pl Mon May 7 16:48:47 2001
Date: Mon, 7 May 2001 17:48:47 +0200 (CEST)
From: Michal Gajda mg169780@students.mimuw.edu.pl
Subject: Macros[by implementor of toy compiler]
On Fri, 4 May 2001, Alan Bawden wrote:
> (...)
> But I think that in many circumstances macros do
> such a good job that I don't see the need to clutter up the language with
> the special-purpose features needed to replace them.
> (...)
I'm currently having fun writing a compiler from an eager lambda calculus
with hygienic macros to its desugared form (using parser combinators,
I estimate 2k lines of ML code), so I'll take the liberty of sharing my
experiences. A working version (simple and without typechecking, so only
syntactic correctness is checked) is expected at the end of the
month.
* Introducing general hygienic macros as you propose forces us to
cope with the following problems:
1. Full typechecking of macros (at the place of definition) seems to need
second-rank polymorphism. (Decidable, but harder to implement.) Of course
you can delay typechecking until you expand all macros, but then the
error messages become unreadable.
[in Lisp this is a non-issue]
2. Macros make the parsed grammar dynamic. Usually a compiler has a hard-coded
parser, generated by an LALR parser generator (like Happy or Yacc), compiled in.
Introducing each macro as you proposed would (I think) require generating a
new parser (at least for that fragment of the grammar).
[In Lisp we just recognise a form whose head names a macro, and then apply the
macro's semantic function to the macro's body at COMPILE TIME. So:
a) the basic syntax is unchanged;
b) we need to evaluate expressions at compile time.]
* On the other hand it yields the following advantages:
1. Powerful enough to implement do-notation (for sure) and (probably) all
or most of the other to-core Haskell translations.
2. Lifts (unsurprisingly) the need to use the cpp preprocessor when handling
compatibility issues.
Hope it helps :-) [and feel free to mail me if you are interested]
Michal Gajda
korek@icm.edu.pl
From Alan@LCS.MIT.EDU Tue May 8 08:15:06 2001
Date: Tue, 8 May 2001 03:15:06 -0400 (EDT)
From: Alan Bawden Alan@LCS.MIT.EDU
Subject: Macros[by implementor of toy compiler]
Date: Mon, 7 May 2001 17:48:47 +0200 (CEST)
From: Michal Gajda <mg169780@zodiac.mimuw.edu.pl>
* Introducing general hygienic macros as you propose forces us to
cope with the following problems:
1. Full typechecking of macros (at the place of definition) seems to need
second-rank polymorphism. (Decidable, but harder to implement.) Of course
you can delay typechecking until you expand all macros, but then the
error messages become unreadable.
...
An interesting option is to allow the macro writer to supply his own
type-checker that is then run before macro-expansion. Type errors can then
be presented to the user using the pre-expansion source code.
I can hear you all boggling over the lack of safety in what I have just
proposed. Suppose the macro-writer provides a bogus type-checker? Ack!
Unsafe code! Mid-air collisions! Nuclear melt-downs! Am I nuts!?
Actually no. Just do a separate type-check of the fully expanded code
using only the built-in type-checkers. If this verification check comes up
with a different answer, then one of your macros has a buggy type-checker.
(What you do about finding -that- bug and reporting it to the author of the
macro is another issue...)
A student did a Master's thesis for Olin Shivers and me at MIT last year
trying to work out the kinks in a macro facility that would work
approximately this way. The results were interesting, but not yet ready
for prime time.
From Keith.Wansbrough@cl.cam.ac.uk Tue May 8 16:02:07 2001
Date: Tue, 08 May 2001 16:02:07 +0100
From: Keith Wansbrough Keith.Wansbrough@cl.cam.ac.uk
Subject: Macros[by implementor of toy compiler]
> 2. Macros make the parsed grammar dynamic. Usually a compiler has a hard-coded
> parser, generated by an LALR parser generator (like Happy or Yacc), compiled in.
> Introducing each macro as you proposed would (I think) require generating a
> new parser (at least for that fragment of the grammar).
Dylan has macros and a syntax more interesting than LISP's, so perhaps
it would be worth looking at how they handle it. I forget now, but I
don't think it was too difficult.
I also considered some of the issues we're discussing in an
unpublished paper that appears on my publications page,
http://www.cl.cam.ac.uk/~kw217/research/papers.html#Wansbrough99:Macros
(Section 8 looks briefly at Dylan-style macro facilities).
--KW 8-)
From brk@jenkon.com Tue May 8 17:53:07 2001
Date: Tue, 8 May 2001 09:53:07 -0700
From: brk@jenkon.com brk@jenkon.com
Subject: Functional programming in Python
Hi Manuel,
It's interesting to me to note the things that were interesting to
you. :-) I'm the author of the Xoltar Toolkit (including functional.py)
mentioned in those articles, and I have to agree with Dr. Mertz - I find
Haskell much more palatable than Lisp or Scheme. Many (most?) Python
programmers also have experience in more typeful languages (typically at
least C, since that's how one writes Python extension modules) so perhaps
that's not as surprising as it might seem.
Type inference (to my mind at least) fits the Python mindset very
well. I think most Python programmers would be glad to have strong typing,
so long as they don't have to press more keys to get it. If you have to
declare all your types up front, it just means more time spent changing type
declarations as the design evolves, but if the compiler can just ensure your
usage is consistent, that's hard to argue with.
As for the difficulty with imperative constructs, I agree it's not
even an issue for many (Dylan, ML, et al.) languages, but for Haskell it
still is, in my humble opinion. I found the task of writing a simple program
that did a few simple imperative things inordinately difficult. I know about
the 'do' construct, and I understand the difference between >> and >>=. I've
read a book on Haskell, and implemented functional programming support for
Python, but trying to use Haskell to write complete programs still ties my
brain in knots. I see there are people writing complete, non-trivial
programs in Haskell, but I don't see how.
To be sure, I owe Haskell more of my time and I owe it to myself to
overcome this difficulty, but I don't think it's only my difficulty. In the
Haskell book I have, discussion of I/O is delayed until chapter 18, if
memory serves. One thing that might really help Haskell become more popular
is more documentation which presents I/O in chapter 2 or 3. Clearly the
interesting part of a functional language is the beauty of stringing
together all these functions into a single, elegant expression, but an
introductory text would do well to focus on more immediate problems first.
People on this list and others often say that the main body of the program
is almost always imperative in style, but there's little demonstration of
that fact - most examples are of a purely functional nature.
Please understand I mean these comments in the most constructive
sense. I have the highest respect for Haskell and the folks who work with
it.
Bryn
> -----Original Message-----
> From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> Sent: Sunday, May 06, 2001 10:02 PM
> To: haskell-cafe@haskell.org
> Subject: Functional programming in Python
>
> Two quite interesting articles about FP in Python are over
> at IBM developerWorks:
>
> http://www-106.ibm.com/developerworks/library/l-prog.html
> http://www-106.ibm.com/developerworks/library/l-prog2.html
>
> Two IMHO interesting things to note are the following:
>
> * In Part 1, at the start, there is a bullet list of what
> the author regards as FP "features". I found the
> following interesting about this list:
>
> - There is no mention of the emphasis placed on strong
> typing in many modern functional languages.
>
> - The author makes it sound as if FP can't handle
> imperative features, whereas I would say that this is a
> problem of the past and wasn't an issue in many FP
> languages (Lisp, ML, ...) in the first place.
>
> The opinion of the author is not really surprising, but I
> think, it indicates a problem in how FP presents itself
> to the rest of the world.
>
> * In Part 2, the author writes at the end:
>
> I have found it much easier to get a grasp of functional
> programming in the language Haskell than in Lisp/Scheme
> (even though the latter is probably more widely used, if
> only in Emacs). Other Python programmers might
> similarly have an easier time without quite so many
> parentheses and prefix (Polish) operators.
>
> I think, this is interesting, because both Lisp and Python
> are dynamically typed. So, I would have expected the
> strong type system to be more of a hurdle than Lisp's
> syntax (or lack thereof).
>
> Cheers,
> Manuel
>
From erik@meijcrosoft.com Tue May 8 22:54:24 2001
Date: Tue, 8 May 2001 14:54:24 -0700
From: Erik Meijer erik@meijcrosoft.com
Subject: Functional programming in Python
Interestingly enough, I have the same feeling with Python!
> As for the difficulty with imperative constructs, I agree it's not
> even an issue for many (Dylan, ML, et. al.) languages, but for Haskell it
> still is, in my humble opinion. I found the task of writing a simple program
> that did a few simple imperative things inordinately difficult. I know about
> the 'do' construct, and I understand the difference between >> and >>=. I've
> read a book on Haskell, and implemented functional programming support for
> Python, but trying to use Haskell to write complete programs still ties my
> brain in knots. I see there are people writing complete, non-trivial
> programs in Haskell, but I don't see how.
From chak@cse.unsw.edu.au Wed May 9 08:56:39 2001
Date: Wed, 09 May 2001 17:56:39 +1000
From: Manuel M. T. Chakravarty chak@cse.unsw.edu.au
Subject: Functional programming in Python
brk@jenkon.com wrote,
> It's interesting to me to note the things that were interesting to
> you. :-) I'm the author of the Xoltar Toolkit (including functional.py)
> mentioned in those articles
Cool :-)
> and I have to agree with Dr. Mertz - I find
> Haskell much more palatable than Lisp or Scheme. Many (most?) Python
> programmers also have experience in more typeful languages (typically at
> least C, since that's how one writes Python extension modules) so perhaps
> that's not as surprising as it might seem.
Ok, but there are worlds between C's type system and
Haskell's.[1]
> Type inference (to my mind at least) fits the Python mindset very
> well.
So, how about the following conjecture? Types essentially
only articulate properties about a program that a good
programmer would be aware of anyway and would strive to
reinforce in a well-structured program. Such a programmer
might not have many problems with a strongly typed language.
Now, to me, Python has this image of a well designed
scripting language attracting the kind of programmer who
strives for elegance and well-structured programs. Maybe
that is a reason.
> I think most Python programmers would be glad to have strong typing,
> so long as they don't have to press more keys to get it. If you have to
> declare all your types up front, it just means more time spent changing type
> declarations as the design evolves, but if the compiler can just ensure your
> usage is consistent, that's hard to argue with.
Type inference (as opposed to mere type checking) is
certainly a design goal in Haskell.
> As for the difficulty with imperative constructs, I agree it's not
> even an issue for many (Dylan, ML, et. al.) languages, but for Haskell it
> still is, in my humble opinion. I found the task of writing a simple program
> that did a few simple imperative things inordinately difficult. I know about
> the 'do' construct, and I understand the difference between >> and >>=. I've
> read a book on Haskell, and implemented functional programming support for
> Python, but trying to use Haskell to write complete programs still ties my
> brain in knots. I see there are people writing complete, non-trivial
> programs in Haskell, but I don't see how.
>
> To be sure, I owe Haskell more of my time and I owe it to myself to
> overcome this difficulty, but I don't think it's only my difficulty. In the
> Haskell book I have, discussion of I/O is delayed until chapter 18, if
> memory serves. One thing that might really help Haskell become more popular
> is more documentation which presents I/O in chapter 2 or 3. Clearly the
> interesting part of a functional language is the beauty of stringing
> together all these functions into a single, elegant expression, but an
> introductory text would do well to focus on more immediate problems first.
Absolutely. In fact, you have just pointed out one of the
gripes that I have with most Haskell texts and courses. The
shunning of I/O in textbooks is promoting the image of
Haskell as a purely academic exercise, something which is
not necessary at all. I am teaching an introductory course
with Haskell myself and did I/O in Week 5 out of 14 (these
are students without any previous programming experience).
Moreover, IIRC Paul Hudak's book <http://haskell.org/soe/>
also introduces I/O early.
In other words, I believe that this is a problem with the
presentation of Haskell and not with Haskell itself.
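For what it's worth, the kind of tiny interactive program such an early
I/O lecture can start from is no more than this:

main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("Hello, " ++ name ++ "!")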
Cheers,
Manuel
[1] You might wonder why I am pushing this point. It is
just because the type system seems to be a hurdle for
some people who try Haskell. I am curious to understand
why it is a problem for some and not for others.
From C.T.McBride@durham.ac.uk Thu May 10 13:13:01 2001
Date: Thu, 10 May 2001 13:13:01 +0100 (BST)
From: C T McBride C.T.McBride@durham.ac.uk
Subject: argument permutation and fundeps
Hi
This is a long message, containing a program which makes heavy use of
type classes with functional dependencies, and a query about how the
typechecker treats them. It might be a bit of an effort, but I'd be
grateful for any comment and advice more experienced Haskellers can
spare the time to give me. Er, this ain't no undergrad homework...
I'm a dependently typed programmer and a bit of an old
Lisphead. What's common to both is that it's fairly easy to write
`program reconstruction operations', like the argument permutation
function (of which flip is an instance), specified informally thus:
Given a permutation p on [1..n] and an n-ary function f,
permArg p f x_p(1) .. x_p(n) = f x_1 .. x_n
It's easy in Lisp because you can just compute the relevant
lambda-expression for permArg p f syntactically, then use it as a
program: of course, there's no type system to make sure your p is
really a perm and your f has enough arguments. To give permArg a
precise type, we have to compute an n-ary permutation of the argument
types for an n-ary function space. That is, we need a representation
of permutations p from which to compute
(1) the type of the permutation function
(2) the permutation function itself
This isn't that hard with dependent types, because we can use the same
kinds of data and program as easily at the level of types as we can at the
level of terms.
I'm relatively new to Haskell: what attracted me is that recent extensions
to the language have introduced a model of computation at the type level
via multi-parameter classes with functional dependencies. Although
type-level Haskell programming is separate from, different to, and not
nearly as powerful as term-level Haskell programming, it's still possible
to do some pretty interesting stuff: here's permArg as a Haskell program...
Although we can't compute with types over a data encoding of
permutations, we can use type classes to make `fake' datatypes one
level up. Here's a class representing the natural numbers. Each type
in the class has a single element, which we can use to tell the
typechecker which instance of the class we mean.
> class Nat n
> data O = O
> instance Nat O
> data (Nat n) => S n = S n
> instance (Nat n) => Nat (S n)
Now, for each n in Nat, we need a class Fin n with exactly n instances. A
choice of instance represents the position the first argument of an n-ary
function gets shifted to. We can make Fin (S n) by embedding Fin n with
one type constructor, FS, and chucking in a new type with another, FO.
> class (Nat n) => Fin n x | x -> n
> data (Nat n) => FO n = FO n
> instance (Nat n) => Fin (S n) (FO n)
> data (Nat n,Fin n x) => FS n x = FS n x
> instance (Nat n,Fin n x) => Fin (S n) (FS n x)
The class Factorial n contains n! types---enough to represent the
permutations. It's computed in the traditional way, but this time the
product is cartesian...
> class (Nat n) => Factorial n p | p -> n
> instance Factorial O ()
> instance (Factorial n p,Fin (S n) x) => Factorial (S n) (x,p)
The operation InsertArg n x r s, where x is in Fin (S n), takes an
n-ary function type s and inserts an extra argument type r at
whichever of the (S n) positions is selected by x. The corresponding
function, insertArg, permutes an (S n)-ary function in r -> s by
flipping until the first argument has been moved to the nominated
position. FS codes `flip the argument further in and keep going'; FO
codes `stop flipping'.
> class (Nat n,Fin (S n) x) =>
> InsertArg n x r s t | x -> n, x r s -> t, x t -> r s where
> insertArg :: x -> (r -> s) -> t
> instance (Nat n) => InsertArg n (FO n) r s (r -> s) where
> insertArg (FO _) f = f
> instance (Nat n,Fin (S n) x,InsertArg n x r s t) =>
> InsertArg (S n) (FS (S n) x) r (a -> s) (a -> t) where
> insertArg (FS _ x) f a = insertArg x (flip f a)
PermArg simply works its way down the factorial, performing the InsertArg
indicated at each step. permArg is correspondingly built with insertArg.
> class (Nat n,Factorial n p) =>
> PermArg n p s t | p s -> t, p t -> s where
> permArg :: p -> s -> t
> instance PermArg O () t t where
> permArg () t = t
> instance (InsertArg n x r t u,PermArg n p s t) =>
> PermArg (S n) (x,p) (r -> s) u where
> permArg (x,p) f = insertArg x (\r -> permArg p (f r))
This code is accepted by the Feb 2000 version of Hugs, of course with
-98 selected.
Let's look at some examples: the interesting thing is how the typechecker
copes. Here's the instance of permArg which corresponds to flip
Main> permArg (FS (S O) (FO O),(FO O,()))
ERROR: Unresolved overloading
*** Type : (InsertArg (S O) (FS (S O) (FO O)) a b c,
InsertArg O (FO O) d e b,
PermArg O () f e)
=> (a -> d -> f) -> c
*** Expression : permArg (FS (S O) (FO O),(FO O,()))
OK, I wasn't expecting that to work, because I didn't tell it the type
of the function to permute: however, the machine did figure it out. Look
at where it got stuck: I'd have hoped it would compute that e is f, and hence
that b is d -> f, then possibly even that c is d -> a -> f. Isn't that
what the functional dependencies say?
On the other hand, if I tell it the answer, it's fine.
Main> :t permArg (FS (S O) (FO O),(FO O,())) :: (a -> b -> c) -> b -> a -> c
permArg (FS (S O) (FO O),(FO O,())) :: (a -> b -> c) -> b -> a -> c
First, a nice little monomorphic function.
> elemChar :: Char -> [Char] -> Bool
> elemChar = elem
There's no doubt about the type of the input, but this happens:
Main> permArg (FS (S O) (FO O),(FO O,())) elemChar
ERROR: Unresolved overloading
*** Type : (InsertArg (S O) (FS (S O) (FO O)) Char a b,
InsertArg O (FO O) [Char] c a,
PermArg O () Bool c)
=> b
*** Expression : permArg (FS (S O) (FO O),(FO O,())) elemChar
What am I doing wrong? I hope my program says that c is Bool, and so on.
Again, tell it the answer and it checks out:
Main> :t (permArg (FS (S O) (FO O),(FO O,())) elemChar)
:: [Char] -> Char -> Bool
permArg (FS (S O) (FO O),(FO O,())) elemChar :: [Char] -> Char -> Bool
Adding some arguments gives enough information about `the answer' to
get rid of the explicit typing.
Main> permArg (FS (S O) (FO O),(FO O,())) elemChar ['a','b','c'] 'b'
True :: Bool
It's the same story with arity 3...
Main> :t permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,()))) ::
(a -> b -> c -> d) -> (c -> a -> b -> d)
permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,()))) ::
(a -> b -> c -> d) -> c -> a -> b -> d
Main> permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,()))) foldl
ERROR: Unresolved overloading
*** Type : (InsertArg (S (S O)) (FS (S (S O)) (FO (S O))) (a -> b -> a) c d,
InsertArg (S O) (FS (S O) (FO O)) a e c,
InsertArg O (FO O) [b] f e,
PermArg O () a f)
=> d
*** Expression : permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,())))
foldl
Main> :t (permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,()))) ::
(a -> b -> c -> d) -> (c -> a -> b -> d)) foldl
permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,()))) foldl ::
[a] -> (b -> a -> b) -> b -> b
Main> permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,())))
foldl [1,2,3] (+) 0
6 :: Integer
So, am I failing to explain to Hugs why PermArg and InsertArg are programs,
despite the explicit functional dependencies, or is the typechecker just not
running them? It seems to be expanding PermArg's step case ok, but not
executing the base case, leaving InsertArg blocked. Can anyone shed some
light on the operational semantics of programming at the type level?
Having said all that, I'm really impressed that even this much is possible
in Haskell. It's so nice to be able to write a type that says exactly what I
mean.
Cheers
Conor
From jeff@galconn.com Thu May 10 16:34:02 2001
Date: Thu, 10 May 2001 08:34:02 -0700
From: Jeffrey R. Lewis jeff@galconn.com
Subject: argument permutation and fundeps
C T McBride wrote:
> Hi
>
> This is a long message, containing a program which makes heavy use of
> type classes with functional dependencies, and a query about how the
> typechecker treats them. It might be a bit of an effort, but I'd be
> grateful for any comment and advice more experienced Haskellers can
> spare the time to give me. Er, this ain't no undergrad homework...
Without delving too deeply into your example, it looks like you've bumped into a known bug in Hugs' implementation of functional dependencies. You should try GHCi if you can - it doesn't suffer from this bug.
--Jeff
From mpj@cse.ogi.edu Thu May 10 17:02:41 2001
Date: Thu, 10 May 2001 09:02:41 -0700
From: Mark P Jones mpj@cse.ogi.edu
Subject: argument permutation and fundeps
Hi Jeff,
| Without delving too deeply into your example, it looks like
| you've bumped into a known bug in Hugs implementation of
| functional dependencies. You should try GHCI if you can - it
| doesn't suffer from this bug.
Are there any plans to fix the bug in Hugs? (And is there
anywhere that the bug is documented?)
All the best,
Mark
From C T McBride Thu May 10 17:13:34 2001
Date: Thu, 10 May 2001 17:13:34 +0100 (BST)
From: C T McBride C T McBride
Subject: argument permutation and fundeps
> C T McBride wrote:
>
> > Hi
> >
> > This is a long message, containing a program which makes heavy use of
> > type classes with functional dependencies, and a query about how the
> > typechecker treats them. It might be a bit of an effort, but I'd be
> > grateful for any comment and advice more experienced Haskellers can
> > spare the time to give me. Er, this ain't no undergrad homework...
Jeffrey R. Lewis:
>
> Without delving too deeply into your example, it looks like you've bumped
> into a known bug in Hugs implementation of functional dependencies. You
> should try GHCI if you can - it doesn't suffer from this bug.
Thanks for the tip! Our local Haskell supremo has pointed me to a version
I can run, and that has improved the situation... a bit.
Now I get
Perm> permArg (FS (S O) (FO O),(FO O,()))
No instance for `Show ((r -> s -> s) -> s -> r -> s)'
Obviously, there's no Show method, but I was expecting a more general
type. But if I tell it the right answer, it believes me
Perm> permArg (FS (S O) (FO O),(FO O,()))
:: (a -> b -> c) -> b -> a -> c
No instance for `Show ((a -> b -> t) -> b -> a -> t)'
The main thing is that I can permute a function correctly, without needing
an explicit signature:
Perm> permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,())))
foldl
No instance for `Show ([b] -> (t -> b -> t) -> t -> t)'
Still, those too-specific inferred types are disturbing me a little
Perm> permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,())))
No instance for `Show ((r -> s -> s -> s) -> s -> r -> s -> s)'
The above examples show that my code works with the generality it needs
to. Is there some `defaulting' mechanism at work here for the inferred
types making all those s's the same?
But this is definitely progress!
Thanks
Conor
From kort@science.uva.nl Fri May 11 13:19:21 2001
Date: Fri, 11 May 2001 14:19:21 +0200
From: Jan Kort kort@science.uva.nl
Subject: sharing datatypes : best practice ?
"Taesch, Luc" wrote:
>
> do you isolate just the datatype, or a few related ones with it, in a very small file (header-like, I would say)
> or some basic accessor functions with it?
>
> isn't it leading to massive quantities of small files?
Assuming you have some typed AST with many mutually recursive
datatypes, I would keep them in one big file. This should be
fine if the datatypes are simple (no "deriving" Read and Show
etc.).
For an AST you don't want accessor functions: the datatypes
are the interface. For some datatypes you do want to hide
the datatype and provide a function-based interface; this
should be in the same file as the datatype.
Usually there is also some kind of assumed hierarchy in
datatypes, e.g. Int < List < FiniteMap, to determine where
functions operating on multiple datatypes should be
placed, but that's the same in OO.
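To make the "datatypes are the interface" point concrete, here is a
made-up sketch of such a module (all names are illustrative only): the
mutually recursive types live together in one file, and clients just
import the module and pattern-match on the constructors.
module Ast where
-- Mutually recursive AST datatypes, kept together in one module.
data Expr = Var String
          | Lit Int
          | App Expr Expr
          | Let Decl Expr
data Decl = Bind String Expr
-- A function touching several of these types goes wherever the
-- "highest" type in the assumed hierarchy lives.
declNames :: Decl -> [String]
declNames (Bind n _) = [n]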
Jan
From khaliff@astercity.net Fri May 11 17:30:27 2001
Date: Fri, 11 May 2001 18:30:27 +0200 (CEST)
From: Wojciech Moczydlowski, Jr khaliff@astercity.net
Subject: Arrays in Haskell, was: Re: Functional programming in Python
On Tue, 8 May 2001, Erik Meijer wrote:
> Interestingly enough, I have the same feeling with Python!
Speaking of problems with Haskell, almost every time I write a larger
program, I'm frustrated by the lack of efficient arrays/hashtables in the
standard. I know about ghc's (I|U|M)Arrays, and there are probably
hashtables implemented in the Edison library, but the program's
portability would be lost and nhc/hugs would protest. I would be very
happy if the Haskell developers could settle on a simple, unsophisticated
standard for arrays.
I personally would like an interface like:
data Array type_of_objects_stored = ... -- abstract
data MArray a b = ... -- abstract
instance Monad (MArray a)
put :: Int -> a -> Array a -> MArray ()
get :: Array a -> MArray a
runMArray :: Int -> MArray a -> a -- the Int parameter is the size of the array used.
Even if they were put in IO, I still would not protest. Anything is better
than nothing.
Wojciech Moczydlowski, Jr
From mg169780@students.mimuw.edu.pl Fri May 11 22:48:47 2001
Date: Fri, 11 May 2001 23:48:47 +0200 (CEST)
From: Michal Gajda mg169780@students.mimuw.edu.pl
Subject: Arrays in Haskell
On Fri, 11 May 2001, Wojciech Moczydlowski, Jr wrote:
> data Array type_of_objects_stored = ... -- abstract
> data MArray a b = ... -- abstract
>
> put :: Int -> a -> Array a -> MArray a ()
Probably you meant:
put :: Int -> a -> MArray a ()
> get :: Array a -> MArray a a
get :: Int -> MArray a a
> runMArray :: Int -> MArray a -> a -- int parameter is a size of used
> array.
runMArray :: Int -> a -> MArray a b -> b
[first a is an initializer]
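A throwaway sketch of how these corrected signatures could be realised,
written in the plain Haskell 98 style of the day and backed by a simple
function Int -> a instead of a real O(1) array, purely to pin the types
down (the size argument is unused in this naive version):
newtype MArray a b = MArray ((Int -> a) -> (b, Int -> a))
instance Monad (MArray a) where
  return x       = MArray (\arr -> (x, arr))
  MArray m >>= f = MArray (\arr -> let (x, arr') = m arr
                                       MArray m' = f x
                                   in  m' arr')
put :: Int -> a -> MArray a ()
put i x = MArray (\arr -> ((), \j -> if j == i then x else arr j))
get :: Int -> MArray a a
get i = MArray (\arr -> (arr i, arr))
runMArray :: Int -> a -> MArray a b -> b
runMArray _size initial (MArray m) = fst (m (const initial))
A real implementation would of course use a genuine mutable array
(e.g. in IO or ST) underneath; the point is only that the monadic
interface above type-checks.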
Greetings
Michal Gajda
korek@icm.edu.pl
From khaliff@astercity.net Fri May 11 22:20:12 2001
Date: Fri, 11 May 2001 23:20:12 +0200 (CEST)
From: Wojciech Moczydlowski, Jr khaliff@astercity.net
Subject: Arrays in Haskell
On Fri, 11 May 2001, Michal Gajda wrote:
> On Fri, 11 May 2001, Wojciech Moczydlowski, Jr wrote:
> Probably you meant:
> put :: Int -> a -> MArray a ()
> get :: Int -> MArray a a
> runMArray :: Int -> a -> MArray a b -> b
You're of course right. I should have reread my letter before sending.
> Michal Gajda
Wojciech Moczydlowski, Jr
From ralf@informatik.uni-bonn.de Sun May 13 09:44:19 2001
Date: Sun, 13 May 2001 10:44:19 +0200
From: Ralf Hinze ralf@informatik.uni-bonn.de
Subject: argument permutation and fundeps
Dear Conor,
thanks for posing the `argument permutation' problem. I had several
hours of fun hacking up a solution that works under `ghci' (as Jeff
pointed out, Hugs probably suffers from a bug). The solution is
relatively close to your program; I only simplified the representation of the
`factorial numbers' that select a permutation. The program is
attached below.
Here is a sample interaction (using `ghci -fglasgow-exts PermArg.lhs'):
PermArgs> :t pr
Int -> Bool -> Char -> [Char]
PermArgs> :t perm (a0 <: a0 <: nil) pr
Int -> Bool -> Char -> [Char]
PermArgs> :t perm (a0 <: a1 <: nil) pr
Int -> Char -> Bool -> [Char]
PermArgs> :t perm (a1 <: a0 <: nil) pr
Bool -> Int -> Char -> [Char]
PermArgs> :t perm (a1 <: a1 <: nil) pr
Char -> Int -> Bool -> [Char]
PermArgs> :t perm (a2 <: a0 <: nil) pr
Bool -> Char -> Int -> [Char]
PermArgs> :t perm (a2 <: a1 <: nil) pr
Char -> Bool -> Int -> [Char]
Cheers, Ralf
---
> module PermArgs where
Natural numbers.
> data Zero = Zero
> data Succ nat = Succ nat
Some syntactic sugar.
> a0 = Zero
> a1 = Succ a0
> a2 = Succ a1
> a3 = Succ a2
Inserting first argument.
> class Insert nat a x y | nat a x -> y, nat y -> a x where
> insert :: nat -> (a -> x) -> y
>
> instance Insert Zero a x (a -> x) where
> insert Zero f = f
> instance (Insert nat a x y) => Insert (Succ nat) a (b -> x) (b -> y) where
> insert (Succ n) f a = insert n (flip f a)
Some test data.
> pr :: Int -> Bool -> Char -> String
> pr i b c = show i ++ show b ++ show c
An example session (using `ghci -fglasgow-exts PermArg.lhs'):
PermArgs> :t insert a0 pr
Int -> Bool -> Char -> String
PermArgs> :t insert a1 pr
Bool -> Int -> Char -> String
PermArgs> :t insert a2 pr
Bool -> Char -> Int -> [Char]
PermArgs> :t insert a3 pr
No instance for `Insert (Succ Zero) Int String y'
arising from use of `insert' at <No locn>
Lists.
> data Nil = Nil
> data Cons nat list = Cons nat list
Some syntactic sugar.
> infixr 5 <:
> (<:) = Cons
> nil = a0 <: Nil
Permuting arguments.
> class Perm list x y | list x -> y, list y -> x where
> perm :: list -> x -> y
>
> instance Perm Nil x x where
> perm Nil f = f
> instance (Insert n a y z, Perm list x y) => Perm (Cons n list) (a -> x) z where
> perm (Cons d ds) f = insert d (\a -> perm ds (f a))
An example session (using `ghci -fglasgow-exts PermArg.lhs'):
PermArgs> :t perm (a0 <: a0 <: nil) pr
Int -> Bool -> Char -> [Char]
PermArgs> :t perm (a0 <: a1 <: nil) pr
Int -> Char -> Bool -> [Char]
PermArgs> :t perm (a1 <: a0 <: nil) pr
Bool -> Int -> Char -> [Char]
PermArgs> :t perm (a1 <: a1 <: nil) pr
Char -> Int -> Bool -> [Char]
PermArgs> :t perm (a2 <: a0 <: nil) pr
Bool -> Char -> Int -> [Char]
PermArgs> :t perm (a2 <: a1 <: nil) pr
Char -> Bool -> Int -> [Char]
PermArgs> :t perm (a2 <: a2 <: nil) pr
No instance for `Insert (Succ Zero) Int y y1'
arising from use of `perm' at <No locn>
No instance for `Insert (Succ Zero) Bool [Char] y'
arising from use of `perm' at <No locn>
From brk@jenkon.com Mon May 14 21:36:23 2001
From: brk@jenkon.com (brk@jenkon.com)
Date: Mon, 14 May 2001 13:36:23 -0700
Subject: Functional programming in Python
Message-ID: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
> -----Original Message-----
> From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> Sent: Wednesday, May 09, 2001 12:57 AM
> To: brk@jenkon.com
> Cc: haskell-cafe@haskell.org
> Subject: RE: Functional programming in Python
>
[Bryn Keller] [snip]
>
> > and I have to agree with Dr. Mertz - I find
> > Haskell much more palatable than Lisp or Scheme. Many (most?) Python
> > programmers also have experience in more typeful languages (typically at
> > least C, since that's how one writes Python extension modules) so
> perhaps
> > that's not as surprising as it might seem.
>
> Ok, but there are worlds between C's type system and
> Haskell's.[1]
>
[Bryn Keller]
Absolutely! C's type system is not nearly so powerful or unobtrusive
as Haskell's.
> > Type inference (to my mind at least) fits the Python mindset very
> > well.
>
> So, how about the following conjecture? Types essentially
> only articulate properties about a program that a good
> programmer would be aware of anyway and would strive to
> reinforce in a well-structured program. Such a programmer
> might not have many problems with a strongly typed language.
[Bryn Keller]
I would agree with this.
> Now, to me, Python has this image of a well designed
> scripting language attracting the kind of programmer who
> strives for elegance and well-structured programs. Maybe
> that is a reason.
[Bryn Keller]
This, too. :-)
[Bryn Keller] [snip]
> Absolutely. In fact, you have just pointed out one of the
> gripes that I have with most Haskell texts and courses. The
> shunning of I/O in textbooks is promoting the image of
> Haskell as a purely academic exercise. Something which is
> not necessary at all, I am teaching an introductory course
> with Haskell myself and did I/O in Week 5 out of 14 (these
> are students without any previous programming experience).
> Moreover, IIRC Paul Hudak's book
> also introduces I/O early.
>
> In other words, I believe that this a problem with the
> presentation of Haskell and not with Haskell itself.
>
> Cheers,
> Manuel
>
> [1] You might wonder why I am pushing this point. It is
> just because the type system seems to be a hurdle for
> some people who try Haskell. I am curious to understand
> why it is a problem for some and not for others.
[Bryn Keller]
Since my first message and your and Simon Peyton-Jones' responses,
I've taken a little more time to work with Haskell, re-read Tackling the
Awkward Squad, and browsed the source for Simon Marlow's web server, and
it's starting to feel more comfortable now. In the paper and in the server
source, there is certainly a fair amount of IO work happening, and it all
looks fairly natural and intuitive.
Mostly I find when I try to write code following those examples (or
so I think!), it turns out to be not so easy, and the real difficulty is
that I can't even put my finger on why it's troublesome. I try many
variations on a theme - some work, some fail, and often I can't see why. I
should have kept all the versions of my program that failed for reasons I
didn't understand, but unfortunately I didn't... The only concrete example
of something that confuses me I can recall is the fact that this compiles:
main = do allLines <- readLines; putStr $ unlines allLines
where readLines = do
eof <- isEOF
if eof then return [] else
do
line <- getLine
allLines <- readLines
return (line : allLines)
but this doesn't:
main = do putStr $ unlines readLines
where readLines = do
eof <- isEOF
if eof then return [] else
do
line <- getLine
allLines <- readLines
return (line : allLines)
Evidently this is wrong, but my intuition is that <- simply binds a
name to a value, and that:
foo <- somefunc
bar foo
should be identical to:
bar somefunc
That was one difficulty. Another was trying to figure out what the $
sign was for. Finally I realized it was an alternative to parentheses,
necessary due to the extremely high precedence of function application in
Haskell. That high precedence is also disorienting, by the way. What's the
rationale behind it?
Struggling along, but starting to enjoy the aesthetics of Haskell,
Bryn
p.s. What data have your students' reactions given you about what is
and is not difficult for beginners to grasp?
From jcab@roningames.com Tue May 15 04:26:21 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Mon, 14 May 2001 20:26:21 -0700
Subject: Things and limitations...
In-Reply-To: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com
>
Message-ID: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Hi. First of all, I'm new to Haskell, so greetings to all listeners.
And I come from the oh-so-ever-present world of C/C++ and such. Thinking in
Haskell is quite different, so if you see me thinking in the wrong way,
please, do point it out.
That said, I want to make it clear that I'm seriously trying to
understand the inner workings of a language like Haskell (I've already
implemented a little toy lazy evaluator, which was great in understanding
how this can all work).
Only recently have I come to really understand (I hope) what a monad is
(and it was extremely elusive for a while). It was a real struggle for a
while :-)
At 01:36 PM 5/14/2001 -0700, Bryn Keller wrote:
>The only concrete example
>of something that confuses me I can recall is the fact that this compiles:
>
> main = do allLines <- readLines; putStr $ unlines allLines
> where readLines = do
> eof <- isEOF
> if eof then return [] else
> do
> line <- getLine
> allLines <- readLines
> return (line : allLines)
>
> but this doesn't:
>
> main = do putStr $ unlines readLines
> where readLines = do
> eof <- isEOF
> if eof then return [] else
> do
> line <- getLine
> allLines <- readLines
> return (line : allLines)
>
> Evidently this is wrong, but my intuition is that <- simply binds a
>name to a value, and that:
>
> foo <- somefunc
> bar foo
>
> should be identical to:
>
> bar somefunc
Yes. I'd even shorten that to:
--- Valid
readLines = do
eof <- isEOF
if eof then ...
---
as opposed to:
--- invalid
readLines = do
if isEOF then ...
---
The reason behind this is, evidently, that the
do-notation is just a little bit of syntactic sugar for monads. It can't
"look into" the parameter to "if" to do the monad transfer. In fact, even
if it could look into the if, it wouldn't work without heavy processing. It
would need to do it EXACTLY in that manner (providing a hidden binding
before the expression that uses the bound value).
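Roughly, Bryn's valid readLines above is just sugar for explicit uses
of (>>=), which is where the I/O action actually gets run and its
result bound - that's why the binding can't be hidden inside the "if":
---
-- the do-notation version desugared by hand (isEOF and getLine as in
-- the snippet above: isEOF from the standard IO library, getLine from
-- the Prelude)
readLines =
  isEOF >>= \eof ->
    if eof
      then return []
      else getLine   >>= \line ->
           readLines >>= \allLines ->
           return (line : allLines)
---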
And you'd still have lots of problems dealing with order of execution.
Just think of this example:
---
myfunction = do
if readChar < readChar then ...
---
our hypothetical smarter-do-notation would need to generate one of the
following:
---
myfunction = do
char1 <- readChar
char2 <- readChar
if char1 < char2 then ...
---
or:
---
myfunction = do
char2 <- readChar
char1 <- readChar
if char1 < char2 then ...
---
but which is the correct one? In this case, you might want to define rules
saying that the first is 'obviously' the correct one. But with more complex
operations and expressions it might not be possible.
Or you might want to leave it ambiguous. But that is quite against the
spirit of Haskell, I believe.
In any case, forcing the programmer to be more explicit in these
matters is, I believe, a good thing. Same as not allowing circular
references between modules, for example.
Anyway... I have been toying a bit with Haskell lately, and I have
several questions:
First, about classes of heavily parametric types. Can't be done, I
believe. At least, I haven't been able to. What I was trying to do (as an
exercise to myself) was reconverting Graham Hutton and Erik Meijer's
monadic parser library into a class. Basically, I was trying to convert the
static:
---
newtype Parser a = P (String -> [(a,String)])
item :: Parser Char
force :: Parser a -> Parser a
first :: Parser a -> Parser a
papply :: Parser a -> String -> [(a,String)]
---
---
class (MonadPlus (p s v)) => Parser p where
item :: p s v v
force :: p s v a -> p s v a
first :: p s v a -> p s v a
papply :: p s v a -> s -> [(a,s)]
---
I have at home the actual code I tried to make work, so I can't just
copy/paste it, but it looked something like this. Anyway, this class would
allow me to define parsers that parse any kind of thing ('s', which was
'String' in the original lib), from which you can extract any kind of
element ('v', which was 'Char') and parse it into arbitrary types (the
original parameter 'a'). For example, with this you could parse, say, a
recursive algebraic data structure into something else.
Nhc98 wouldn't take it. I assume this is NOT proper Haskell. The
questions are: Is this doable? If so, how? Is this not recommendable? If
not, why?
I had an idea about how to make this much more palatable. It would be
something like:
---
class (MonadPlus p) => Parser p where
type Source
type Value
item :: p Value
force :: p a -> p a
first :: p a -> p a
papply :: p a -> Source -> [(a,Source)]
---
So individual instances of Parser would define the actual type aliases
Source and Value. Again, though, this is NOT valid Haskell.
Questions: Am I being unreasonable here? Why?
Ok, last, I wanted to alias a constructor. So:
---
module MyModule(Type, TypeCons) where
newtype Type = TypeCons Integer
instance SomeClass Type where
....
---
---
module Main where
import MyModule
newtype NewType = NewTypeCons Type
---
So, now, if I want to construct a NewType, I need to do something like:
---
kk = NewTypeCons (TypeCons 5)
---
And if I want to pattern-match a NewType value, I have to use both
constructors again. It's quite a pain. I've tried to make a constructor
that can do it in one shot, but I've been unable. Tried things like:
---
AnotherCons i = NewTypeCons (TypeCons i)
---
but nothing works. Again, the same questions: Is it doable? Am I being
unreasonable here?
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From jcab@roningames.com Tue May 15 04:46:20 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Mon, 14 May 2001 20:46:20 -0700
Subject: Databases
In-Reply-To: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
References: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com >
Message-ID: <4.3.2.7.2.20010514203929.02094d70@207.33.235.243>
Is there an efficient way to make simple databases in Haskell? I mean
something like a dictionary, hash table or associative container of some kind.
I'm aware that Haskell being pure functional means that those things
are not as easily implemented as they can be in other languages; in fact,
I've implemented a simple one myself, using a list of pairs (key,value)
(which means it's slow on lookup) and an optional monad to handle the
updates/lookups.
I guess what I'm wondering is what has been done in this respect. There
is no such thing in the standard library, as far as I can see, and my
search through the web has turned up nothing.
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From JAP97003@uconnvm.uconn.edu Tue May 15 05:12:19 2001
From: JAP97003@uconnvm.uconn.edu (Justin: Member Since 1923)
Date: Tue, 15 May 2001 00:12:19 EDT
Subject: Databases
Message-ID: <20010515040400.C09AB255AB@www.haskell.org>
>
> Is there an efficient way to make simple databases in Haskell? I mean
> something like a dictionary, hash table or associative container of some kind.
>
> I'm aware that Haskell being pure functional means that those things
> are not as easily implemented as they can be in other languages, in fact,
> I've implemented a simple one myself, using a list of pairs (key,value)
> (which means it's slow on lookup) and an optional monad to handle the
> updates/lookups.
>
> I guess what I'm wondering is what has been done in this respect. There
> is no such thing in the standard library, as far as I can see, and my
> search through the web has turned up nothing.
Chris Okasaki has developed a whole mess of purely functional data
structures. He has a book:
http://www.cs.columbia.edu/~cdo/papers.html#cup98
Maybe this is what you're looking for?
HTH
-Justin
From Tom.Pledger@peace.com Tue May 15 05:50:02 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Tue, 15 May 2001 16:50:02 +1200
Subject: Things and limitations...
In-Reply-To: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
References: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
<4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Message-ID: <15104.46458.982417.425584@waytogo.peace.co.nz>
Juan Carlos Arevalo Baeza writes:
:
| First, about classes of heavily parametric types. Can't be done, I
| believe. At least, I haven't been able to. What I was trying to do (as an
| exercise to myself) was reconverting Graham Hutton and Erik Meijer's
| monadic parser library into a class. Basically, I was trying to convert the
| static:
|
| ---
| newtype Parser a = P (String -> [(a,String)])
| item :: Parser Char
| force :: Parser a -> Parser a
| first :: Parser a -> Parser a
| papply :: Parser a -> String -> [(a,String)]
| ---
|
| ---
| class (MonadPlus (p s v)) => Parser p where
| item :: p s v v
| force :: p s v a -> p s v a
| first :: p s v a -> p s v a
| papply :: p s v a -> s -> [(a,s)]
| ---
|
| I have at home the actual code I tried to make work, so I can't just
| copy/paste it, but it looked something like this. Anyway, this class would
| allow me to define parsers that parse any kind of thing ('s', which was
| 'String' in the original lib), from which you can extract any kind of
| element ('v', which was 'Char') and parse it into arbitrary types (the
| original parameter 'a'). For example, with this you could parse, say, a
| recursive algebraic data structure into something else.
|
| Nhc98 wouldn't take it. I assume this is NOT proper Haskell. The
| questions are: Is this doable? If so, how? Is this not recommendable? If
| not, why?
I did something similar recently, but took the approach of adding more
parameters to newtype Parser, rather than converting it into a class.
Here's how it begins:
type Indent = Int
type IL a = [(a, Indent)]
newtype Parser a m b = P (Indent -> IL a -> m (b, Indent, IL a))
instance Monad m => Monad (Parser a m) where
return v = P (\ind inp -> return (v, ind, inp))
(P p) >>= f = P (\ind inp -> do (v, ind', inp') <- p ind inp
let (P p') = f v
p' ind' inp')
fail s = P (\ind inp -> fail s)
instance MonadPlus m => MonadPlus (Parser a m) where
mzero = P (\ind inp -> mzero)
(P p) `mplus` (P q) = P (\ind inp -> (p ind inp `mplus` q ind inp))
item :: MonadPlus m => Parser a m a
item = P p
where
p ind [] = mzero
p ind ((x, i):inp)
| i < ind = mzero
| otherwise = return (x, ind, inp)
This differs from Hutton's and Meijer's original in these regards:
- It's generalised over the input token type: the `a' in
`Parser a m b' is not necessarily Char.
- It's generalised over the MonadPlus type in which the result is
given: the `m' in `Parser a m b' is not necessarily [].
- It's specialised for parsing with a layout rule: there's an
indentation level in the state, and each input token is expected
to be accompanied by an indentation level.
You could try something similar for your generalisations:
newtype Parser ct r = P (ct -> [(r, ct)])
-- ct: collection of tokens, r: result
instance SuitableCollection ct => Monad (Parser ct)
where ...
instance SuitableCollection ct => MonadPlus (Parser ct)
where ...
item :: Collects ct t => Parser ct t
force :: Parser ct r -> Parser ct r
first :: Parser ct r -> Parser ct r
papply :: Parser ct r -> ct -> [(r, ct)]
The `SuitableCollection' class is pretty hard to define, though.
Either it constrains its members to be list-shaped, or it prevents you
from reusing functions like `item'. Hmmm... I think I've just
stumbled across your reason for treating Parser as a class.
When the input isn't list-shaped, is the activity still called
parsing? Or is it a generalised fold (of the input type) and unfold
(of the result type)?
Regards,
Tom
From jcab@roningames.com Tue May 15 06:43:51 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Mon, 14 May 2001 22:43:51 -0700
Subject: Databases
In-Reply-To: <20010515040400.C09AB255AB@www.haskell.org>
Message-ID: <4.3.2.7.2.20010514224048.02094d70@207.33.235.243>
At 12:12 AM 5/15/2001 -0400, Justin: Member Since 1923 wrote:
> >something like a dictionary, hash table or associative container of
>some kind.
>
>Chris Okasaki has developed a whole mess of purely function data
>structures. He has a book:
>http://www.cs.columbia.edu/~cdo/papers.html#cup98
>
>Maybe this is what you're looking for?
I believe so! Thanx!
Geee... A red-black tree set in 30 lines of code... I hope it works :)
I've never done one of those myself, but it's said that they are as tricky
as they are efficient... What I'd use in C++ (STL map) is something pretty
close, and it's usually implemented as an R/B tree, so I think it'll work.
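Just to get a feel for the flavour of a purely functional dictionary,
here is a minimal, unbalanced binary-search-tree map I'd sketch from
memory (names are mine, not Okasaki's); his red-black trees add the
rebalancing that keeps the operations logarithmic in the worst case:
---
data Map k v = Leaf | Node (Map k v) k v (Map k v)
empty :: Map k v
empty = Leaf
insert :: Ord k => k -> v -> Map k v -> Map k v
insert k v Leaf = Node Leaf k v Leaf
insert k v (Node l k' v' r)
  | k < k'    = Node (insert k v l) k' v' r
  | k > k'    = Node l k' v' (insert k v r)
  | otherwise = Node l k v r           -- replace an existing binding
find :: Ord k => k -> Map k v -> Maybe v
find _ Leaf = Nothing
find k (Node l k' v r)
  | k < k'    = find k l
  | k > k'    = find k r
  | otherwise = Just v
---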
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From jcab@roningames.com Tue May 15 06:46:24 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Mon, 14 May 2001 22:46:24 -0700
Subject: Things and limitations...
In-Reply-To: <15104.46458.982417.425584@waytogo.peace.co.nz>
References: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
<61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
<4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Message-ID: <4.3.2.7.2.20010514224429.02094d70@207.33.235.243>
At 04:50 PM 5/15/2001 +1200, Tom Pledger wrote:
>I did something similar recently, but took the approach of adding more
>parameters to newtype Parser, rather than converting it into a class.
Yes, that's how I started.
>You could try something similar for your generalisations:
>
> newtype Parser ct r = P (ct -> [(r, ct)])
> -- ct: collection of tokens, r: result
This is EXACTLY how I started :)
> instance SuitableCollection ct => Monad (Parser ct)
> where ...
>
> instance SuitableCollection ct => MonadPlus (Parser ct)
> where ...
>
> item :: Collects ct t => Parser ct t
> force :: Parser ct r -> Parser ct r
> first :: Parser ct r -> Parser ct r
> papply :: Parser ct r -> ct -> [(r, ct)]
>
>The `SuitableCollection' class is pretty hard to define, though.
>Either it constrains its members to be list-shaped, or it prevents you
>from reusing functions like `item'. Hmmm... I think I've just
>stumbled across your reason for treating Parser as a class.
:) Maybe. Thanx for your help, though.
>When the input isn't list-shaped, is the activity still called
>parsing? Or is it a generalised fold (of the input type) and unfold
>(of the result type)?
Actually, I guess you can call it destructive pattern-matching.
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From chak@cse.unsw.edu.au Tue May 15 06:46:23 2001
From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty)
Date: Tue, 15 May 2001 15:46:23 +1000
Subject: Functional programming in Python
In-Reply-To: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
References: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
Message-ID: <20010515154623Y.chak@cse.unsw.edu.au>
brk@jenkon.com wrote,
> > From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> > Absolutely. In fact, you have just pointed out one of the
> > gripes that I have with most Haskell texts and courses. The
> > shunning of I/O in textbooks is promoting the image of
> > Haskell as a purely academic exercise. Something which is
> > not necessary at all, I am teaching an introductory course
> > with Haskell myself and did I/O in Week 5 out of 14 (these
> > are students without any previous programming experience).
> > Moreover, IIRC Paul Hudak's book
> > also introduces I/O early.
> >
> > In other words, I believe that this a problem with the
> > presentation of Haskell and not with Haskell itself.
>
> Since my first mesage and your and Simon Peyton-Jones' response,
> I've taken a little more time to work with Haskell, re-read Tackling the
> Awkward squad, and browsed the source for Simon Marlow's web server, and
> it's starting to feel more comfortable now. In the paper and in the server
> souce, there is certainly a fair amount of IO work happening, and it all
> looks fairly natural and intuitive.
>
> Mostly I find when I try to write code following those examples (or
> so I think!), it turns out to be not so easy, and the real difficulty is
> that I can't even put my finger on why it's troublesome. I try many
> variations on a theme - some work, some fail, and often I can't see why. I
> should have kept all the versions of my program that failed for reasons I
> didn't understand, but unfortunately I didn't... The only concrete example
> of something that confuses me I can recall is the fact that this compiles:
>
> main = do allLines <- readLines; putStr $ unlines allLines
> where readLines = do
> eof <- isEOF
> if eof then return [] else
> do
> line <- getLine
> allLines <- readLines
> return (line : allLines)
>
> but this doesn't:
>
> main = do putStr $ unlines readLines
> where readLines = do
> eof <- isEOF
> if eof then return [] else
> do
> line <- getLine
> allLines <- readLines
> return (line : allLines)
>
> Evidently this is wrong, but my intuition is that <- simply binds a
> name to a value, and that:
No, that is not the case. It does more: it executes an I/O action.
> foo <- somefunc
> bar foo
>
> should be identical to:
>
> bar somefunc
But it isn't; however, we have
do
let foo = somefunc
bar foo
is identical to
do
bar somefunc
So, this all boils down to the question, what is the
difference between
do
let foo = somefunc -- Version 1
bar foo
and
do
foo <- somefunc -- Version 2
bar foo
The short answer is that Version 2 (the arrow) executes any
side effects encoded in `somefunc', whereas Version 1 (the
let binding) doesn't do that. Expressions given as an
argument to a function behave as if they were let bound, ie,
they don't execute any side effects. This explains why the
identity that you stated above does not hold.
So, at the core of it is that Haskell insists on distinguishing
expressions that can have side effects from those that
cannot. This distinction makes the language a little bit
more complicated (eg, by forcing us to distinguish between
`=' and `<-'), but it also has the benefit that both a
programmer and the compiler can immediately tell which
expressions do have side effects and which don't. For
example, this often makes it a lot easier to alter code
written by somebody else. It also makes it easier to
formally reason about code and it gives the compiler scope
for rather radical optimisations.
To reinforce the distinction, consider the following two
pieces of code (where `readLines' is the routine you defined
above):
do
let x = readLines
y <- x
z <- x
return (y ++ z)
and
do
x <- readLines
let y = x
let z = x
return (y ++ z)
How is the result (and I/O behaviour) different?
> That was one difficulty. Another was trying to figure out what the $
> sign was for. Finally I realized it was an alternative to parentheses,
> necessary due to the extremely high precedence of function application in
> Haskell. That high precedence is also disorienting, by the way. What's the
> rationale behind it?
You want to be able to write
f 1 2 + g 3 4
instead of
(f 1 2) + (g 3 4)
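And the $ that you asked about sits at the other end of the precedence
scale: the Prelude defines it as plain application with the lowest
precedence (infixr 0, with f $ x = f x), so it swallows everything to
its right and saves the trailing parentheses. A tiny illustration:
example, example' :: [String] -> IO ()
example  xs = putStr $ unlines xs
example' xs = putStr (unlines xs)   -- exactly the same thing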
> p.s. What data have your students' reactions given you about what is
> and is not difficult for beginners to grasp?
They found it to be a difficult topic, but they found
"Unix/Shell scripts" even harder (and we did only simple
shell scripts). I actually made another interesting
observation (and keep in mind that for many that was their
first contact with programming). I had expected the
distinction between side-effecting and non-side-effecting
expressions to be a hurdle in understanding I/O. What I
hadn't taken into account was the fact that they had
only worked in an interactive interpreter environment (as
opposed to, possibly compiled, standalone code), and that this would pose
them a problem. The interactive interpreter had allowed
them to type in input and get results printed all along,
so they didn't see why it should be necessary to complicate
a program with print statements.
I append the full breakdown of the student answers.
Cheers,
Manuel
-=-
                            Very                              Very
                            difficult       Average           easy
Recursive functions          3.8%   16.1%   44.2%   25.2%   12.1%
List processing              5.2%   18%     44%     25.4%    8.8%
Pattern matching             3%     15.2%   41.4%   27.8%   14%
Association lists            4.5%   28.5%   48.5%   15.4%    4.5%
Polymorphism/overloading    10.9%   44.2%   37.8%    5.9%    2.6%
Sorting                      5.7%   33.5%   47.6%   11.6%    3%
Higher-order functions      16.9%   43%     31.4%    8.5%    1.6%
Input/output                32.6%   39.7%   19.7%    7.3%    2.1%
Modules/decomposition       12.8%   37.1%   35.9%   12.1%    3.5%
Trees                       29.5%   41.9%   21.9%    5.7%    2.6%
ADTs                        35.9%   36.4%   20.9%    6.1%    2.1%
Unix/shell scripts          38.5%   34.7%   20.7%    5.7%    1.9%
Formal reasoning            11.1%   22.6%   31.9%   20.9%   15%
From brk@jenkon.com Tue May 15 18:16:14 2001
From: brk@jenkon.com (brk@jenkon.com)
Date: Tue, 15 May 2001 10:16:14 -0700
Subject: Functional programming in Python
Message-ID: <61AC3AD3E884D411836F0050BA8FE9F3355218@franklin.jenkon.com>
> -----Original Message-----
> From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> Sent: Monday, May 14, 2001 10:46 PM
> To: brk@jenkon.com
> Cc: haskell-cafe@haskell.org
> Subject: RE: Functional programming in Python
>
> brk@jenkon.com wrote,
>
> > > From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> > Evidently this is wrong, but my intuition is that <- simply binds a
> > name to a value, and that:
>
> No, that is not the case. It does more, it executes an I/O action.
>
[Bryn Keller] [snip]
> The short answer is that Version 2 (the arrow) executes any
> side effects encoded in `somefunc', whereas Version 1 (the
> let binding) doesn't do that. Expressions given as an
> argument to a function behave as if they were let bound, ie,
> they don't execute any side effects. This explains why the
> identity that you stated above does not hold.
>
> So, at the core is that Haskell insists on distinguishing
> expressions that can have side effects from those that
> cannot. This distinction makes the language a little bit
> more complicated (eg, by enforcing us to distinguish between
> `=' and `<-'), but it also has the benefit that both a
> programmer and the compiler can immediately tell which
> expressions do have side effects and which don't. For
> example, this often makes it a lot easier to alter code
> written by somebody else. It also makes it easier to
> formally reason about code and it gives the compiler scope
> for rather radical optimisations.
>
[Bryn Keller]
Exactly the clarification I needed, thank you!
[Bryn Keller] [snip]
> > p.s. What data have your students' reactions given you about what is
> > and is not difficult for beginners to grasp?
>
> They found it to be a difficult topic, but they found
> "Unix/Shell scripts" even harder (and we did only simple
> shell scripts). I actually made another interesting
> observation (and keep in mind that for many that was their
> first contact with programming). I had prepared for the
> distinction between side effecting and non-side-effecting
> expressions to be a hurdle in understanding I/O. What I
> hand't taken into account was that the fact that they had
> only worked in an interactive interpreter environment (as
> opposed to, possibly compiled, standalone code) would pose
> them a problem. The interactive interpreter had allowed
> them to type in input and get results printed all way long,
> so they didn't see why it should be necessary to complicate
> a program with print statements.
[Bryn Keller]
Interesting!
Thanks for your help, and for sharing your students' observations. I
always knew shell scripting was harder than it ought to be. ;-)
Bryn
From mechvel@math.botik.ru Wed May 16 11:42:13 2001
From: mechvel@math.botik.ru (S.D.Mechveliani)
Date: Wed, 16 May 2001 14:42:13 +0400
Subject: algebra proposals
Message-ID:
It appears there was a discussion on the library proposals,
by D. Thurston and maybe others,
and people asked for my comments. My comments are as follows.
-----------------------------------------------------------------
(1) It is misleading to call this part of the standard library
`numeric':
`numeric classes', `numeric Prelude', `Num', and so on.
Matrices and polynomials match many of the corresponding instances
but can hardly be called `numeric' objects.
This is why BAL introduces the term Basic Algebra (items).
(2) The order of considering problems matters.
First we have to decide how to program algebra and then - whether
(and how) to change the standard. The idea of the first stage
forms the skeleton for the second.
(3) As I see it, we cannot decide how to program algebra in Haskell.
Hence, it is better not to touch the standard - in order to save
effort and e-mail noise.
-----------------------------------------------------------------
Comments to (2)
---------------
Anyone aiming to improve the standard has either to write a
sensible algebraic Haskell application or to study attentively
(computing various examples) an existing one.
A `sensible application' should show how to operate in parametric
domains, not only in domains like (Integer, Bool).
Example: polynomials in [x1..xn] with coefficients over a field
(like rational numbers) quotiented by several equations.
For example, the data
r = Residue (x+y) (Ideal [x^2+3, y^3-2])
represents the mathematical expression
squareRoot(-3) + cubicRoot(2)
The basis bas = [x^2+3, y^3-2] for I = Ideal [x^2+3, y^3-2]
can be computed as first-class data.
And for many instances
(they may be Fractional, Field, ...)
the domain (type?) of r matches or does not match
depending on the value of bas.
Even if the application builds the domains only statically,
when the compiler knows statically all the intended values of bas,
Haskell would still meet difficulties.
On dependent types
------------------
Maybe, Haskell has to move to dependent types.
I am not sure, but probably, unsolvability does not matter.
If the compiler fails to prove the correctness condition (on p) for
a type T(p) within the given threshold d, the compiler reports
"cannot prove ...". The user has to add annotations that help to
prove this. Or to say "solve at run-time". Or to say "I postulate
this condition on p, put the correctness on me".
With kind regards,
-----------------
Serge Mechveliani
mechvel@botik.ru *** I am not in haskell-cafe list ***
From jcab@roningames.com Thu May 17 03:20:00 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Wed, 16 May 2001 19:20:00 -0700
Subject: Things and limitations...
In-Reply-To: <15104.46458.982417.425584@waytogo.peace.co.nz>
References: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
<61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
<4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Message-ID: <4.3.2.7.2.20010516190556.03cd57f0@207.33.235.243>
At 04:50 PM 5/15/2001 +1200, Tom Pledger wrote:
> | ---
> | class (MonadPlus (p s v)) => Parser p where
> | item :: p s v v
> | force :: p s v a -> p s v a
> | first :: p s v a -> p s v a
> | papply :: p s v a -> s -> [(a,s)]
> | ---
>
>[...]
>
>The `SuitableCollection' class is pretty hard to define, though.
>Either it constrains its members to be list-shaped, or it prevents you
>from reusing functions like `item'. Hmmm... I think I've just
>stumbled across your reason for treating Parser as a class.
>
>When the input isn't list-shaped, is the activity still called
>parsing? Or is it a generalised fold (of the input type) and unfold
>(of the result type)?
Well, it looks like Justin's answer to my "Databases" thread gave me
the clue. What I want is called "Multiple parameter classes". Okasaki's
code for implementing sets uses this extension to make a Set class. It
wouldn't compile with nhc98 either, so I tried Hugs, which does support it
if extensions are enabled, and has, in its documentation, a very nice
explanation of the tradeoffs that using this extension entails.
So, what I really want is something like:
class (MonadPlus (p s v)) => Parser p s v | s -> v where
item :: p s v v
force :: p s v a -> p s v a
first :: p s v a -> p s v a
papply :: p s v a -> s -> [(a,s)]
I haven't had the time to play with this yet, but it sounds promising...
In case anyone is interested in the Hugs documentation of the feature:
http://www.cse.ogi.edu/PacSoft/projects/Hugs/pages/hugsman/exts.html#exts
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From qrczak@knm.org.pl Thu May 17 08:31:23 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 17 May 2001 07:31:23 GMT
Subject: Things and limitations...
References: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Message-ID:
Mon, 14 May 2001 20:26:21 -0700, Juan Carlos Arevalo Baeza writes:
> class (MonadPlus (p s v)) => Parser p where
> item :: p s v v
> force :: p s v a -> p s v a
> first :: p s v a -> p s v a
> papply :: p s v a -> s -> [(a,s)]
This MonadPlus superclass can't be written. The Parser class is
overloaded only on p and must work uniformly on s and v, which can
be expressed for functions (by using s and v as here: type variables
not mentioned elsewhere), but can't for superclasses. What you want
here is this:
class (forall s v. MonadPlus (p s v)) => Parser p where
which is not supported by any Haskell implementation (but I hope it
will be: it's not the first case where it would be useful).
This should work on implementations supporting multiparameter type
classes (ghc and Hugs):
class (MonadPlus (p s v)) => Parser p s v where
item :: p s v v
force :: p s v a -> p s v a
first :: p s v a -> p s v a
papply :: p s v a -> s -> [(a,s)]
Well, having (p s v) in an argument of a superclass context is not
standard either :-( Haskell 98 requires types here to be type variables.
It requires that each parser type is parametrized by s and v; a
concrete parser type with hardwired String can't be made an instance of
this class, unless wrapped in a type which provides these parameters.
The best IMHO solution uses yet another extension: functional dependencies.
class (MonadPlus p) => Parser p s v | p -> s v where
item :: p v
force :: p a -> p a
first :: p a -> p a
papply :: p a -> s -> [(a,s)]
Having a fundep makes it possible to have methods which don't have v and
s in their types. The fundep states that a single parser parses only
one type of input and only one type of tokens, so the type will
be implicitly deduced from the type of the parser itself, based on the
available instances.
Well, I think that s will always be [v], so it can be simplified thus:
class (MonadPlus p) => Parser p v | p -> v where
item :: p v
force :: p a -> p a
first :: p a -> p a
papply :: p a -> [v] -> [(a,[v])]
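As a rough illustration (with made-up names, in the Hugs -98 /
ghci -fglasgow-exts style used above), the plain Hutton/Meijer-style
parser could be made an instance of this class along these lines:
import Monad (MonadPlus(..))   -- the standard Monad library module
newtype HMParser a = HMP (String -> [(a, String)])
instance Monad HMParser where
  return v    = HMP (\inp -> [(v, inp)])
  HMP p >>= f = HMP (\inp -> concat [ q out | (v, out) <- p inp
                                            , let HMP q = f v ])
instance MonadPlus HMParser where
  mzero               = HMP (\_ -> [])
  HMP p `mplus` HMP q = HMP (\inp -> p inp ++ q inp)
-- The dependency p -> v pins the token type: for HMParser it is Char.
instance Parser HMParser Char where
  item           = HMP (\inp -> case inp of
                                  []     -> []
                                  (c:cs) -> [(c, cs)])
  force          = id                        -- good enough for a sketch
  first (HMP p)  = HMP (\inp -> take 1 (p inp))
  papply (HMP p) = p
Then the fundep lets the typechecker deduce v = Char from the parser
type alone, without any extra annotations.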
Without fundeps it could be split into classes depending on which
methods require v:
class (MonadPlus p) => BasicParser p where
force :: p a -> p a
first :: p a -> p a
class (BasicParser p) => Parser p v where
item :: p v
papply :: p a -> [v] -> [(a,[v])]
This differs from the fundep solution in that sometimes an explicit type
constraint must be used to disambiguate the type of v, because the
declaration states that the same parser could parse different types
of values. Well, perhaps this is what you want, and
item :: SomeConcreteParser Char
could give one character, whereas
item :: SomeConcreteParser (Char,Char)
gives two? In any case, using item in a way which doesn't say which
item type to use is an error.
> Ok, last, I wanted to alias a constructor. So:
There is no such thing. A constructor can't be renamed. You would
have to wrap the type inside the constructor in a new constructor.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From karczma@info.unicaen.fr Thu May 17 09:40:07 2001
From: karczma@info.unicaen.fr (Jerzy Karczmarczuk)
Date: Thu, 17 May 2001 10:40:07 +0200
Subject: BAL paper available >> graphic libraries
References:
<3B015A65.C87801F5@info.unicaen.fr>
<20010515211402.B24804@math.harvard.edu>
<3B03748C.D65FDD19@info.unicaen.fr> <15107.31057.395890.996366@tcc2>
Message-ID: <3B038E67.CBC05F55@info.unicaen.fr>
[[Perhaps, if this thread continues, it is time to move to a nice -café
at the corner.]]
Timothy Docker:
> Has anyone considered writing a haskell wrapper for SDL - Simple
> Directmedia Layer at http://www.libsdl.org ?
>
> This is a cross platform library intended for writing games, and aims
> for a high performance, low level API. It would be interesting to see
> how clean a functional API could be built around such an imperative
> framework.
Ammmmm...... I didn't want to touch this issue, but, well, indeed, I had
a look at Sam Lantinga's SDL package some time ago (I believe a new
version exists now). I know that Johnny Andersen (somewhere near DIKU,
plenty of craziness on his ÁNOQ pages...) produced Standard ML bindings
for that. But reading this stuff I was simply scared to death!
Using 6 compilers simultaneously, passing through "C" code, etc. - my
impression was: OK, it should work. If somebody wants to write a game
or a concrete simulation in ML, it might help them.
But, on the other hand, if you want to have a decent programming platform,
enabling you to write - even (or: especially) for pedagogical purposes -
some graphic tools of universal character, say,
* a model ray tracer with the "native" scene description language (i.e.
the same language for the core implementation, and for the scene/object
description, and for its scripting (animation)),
* a generic texture generator and image processing toolbox with all such
stuff as relational algebra of images, math. morphology, etc.,
* a radiosity machine
* ... a dozen other projects ...
then without common memory management, without rebuilding the - say -
Haskell runtime *with* SDL or other low-level graphics utilities, it
might be difficult to use.
The approach taken by Clean'ers, to have an "almost intrinsic" graphic
object-oriented IO layer, and building their Game Library upon it, seems
more reasonable, and at any rate it has more sex-appeal for those who
are truly interested in practical functional programming as such, and
not just in stacking some external goodies which integrate badly with the
language.
Bother, I realize that my main contributions to this list are just
complaints. Could somebody teleport me to a desert island, with
some computers but without hordes of students, just for 6 months?
Jerzy Karczmarczuk
Caen, France
From fidel@cs.chalmers.se Thu May 17 11:08:01 2001
From: fidel@cs.chalmers.se (Pablo E. Martinez Lopez)
Date: Thu, 17 May 2001 12:08:01 +0200
Subject: Things and limitations...
References: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Message-ID: <3B03A301.6428DDAF@cs.chalmers.se>
I have made something very similar, and it worked.
That work is reported in the paper "Generic Parser Combinators"
published in the 2nd Latin-American Conference on Functional Programming
(CLaPF). You can download it from
ftp://sol.info.unlp.edu.ar/pub/papers/theory/fp/2ndCLaPF/Papers/mlopez2.ps.gz
There I have made Hutton's parsers, Fokker's parsers and Rojemo's
parsers instances of a class Parser that looks similar to what you have
attempted. But it uses multiparameter type classes.
Sadly, Swierstra's parsers cannot be made instances of this class,
because they are not monads.
I hope this will help you.
FF
Juan Carlos Arevalo Baeza wrote:
> First, about classes of heavily parametric types. Can't be done, I
> believe. At least, I haven't been able to. What I was trying to do (as an
> exercise to myself) was reconverting Graham Hutton and Erik Meijer's
> monadic parser library into a class. Basically, I was trying to convert the
> static:
>
> ---
> newtype Parser a = P (String -> [(a,String)])
> item :: Parser Char
> force :: Parser a -> Parser a
> first :: Parser a -> Parser a
> papply :: Parser a -> String -> [(a,String)]
> ---
>
> ---
> class (MonadPlus (p s v)) => Parser p where
> item :: p s v v
> force :: p s v a -> p s v a
> first :: p s v a -> p s v a
> papply :: p s v a -> s -> [(a,s)]
> ---
>
> I have at home the actual code I tried to make work, so I can't just
> copy/paste it, but it looked something like this. Anyway, this class would
> allow me to define parsers that parse any kind of thing ('s', which was
> 'String' in the original lib), from which you can extract any kind of
> element ('v', which was 'Char') and parse it into arbitrary types (the
> original parameter 'a'). For example, with this you could parse, say, a
> recursive algebraic data structure into something else.
>
> Nhc98 wouldn't take it. I assume this is NOT proper Haskell. The
> questions are: Is this doable? If so, how? Is this not recommendable? If
> not, why?
From kort@science.uva.nl Thu May 17 14:01:49 2001
From: kort@science.uva.nl (Jan Kort)
Date: Thu, 17 May 2001 15:01:49 +0200
Subject: BAL paper available >> graphic libraries
References:
<3B015A65.C87801F5@info.unicaen.fr>
<20010515211402.B24804@math.harvard.edu>
<3B03748C.D65FDD19@info.unicaen.fr> <15107.31057.395890.996366@tcc2> <3B038E67.CBC05F55@info.unicaen.fr>
Message-ID: <3B03CBBD.29BC7DC@wins.uva.nl>
Jerzy Karczmarczuk wrote:
>
> [[Perhaps, if this thread continues, it is time to move to a nice -café
> at the corner.]]
>
> Timothy Docker:
>
> > Has anyone considered writing a haskell wrapper for SDL - Simple
> > Directmedia Layer at http://www.libsdl.org ?
> >
> > This is a cross platform library intended for writing games, and aims
> > for a high performance, low level API. It would be interesting to see
> > how clean a functional API could be built around such an imperative
> > framework.
Yes, SDL is a very interesting library, I wrote a wrapper for
part of it a while ago:
http://www.science.uva.nl/~kort/sdlhs/
It's far from complete and I have no plans to work on it. But for
trying out graphics stuff it's very useful, mainly because you
can understand the whole thing, fix it if it breaks down: make
some change and recompile the whole wrapper in a minute or so.
>
> Ammmmm...... I didn't want to touch this issue, but, well, indeed, I had
> a look on Sam Lantinga's SDL package some time ago (I believe, a new
> version exists now). I know that Johnny Andersen (somewhere near DIKU,
> plenty of craziness on his ÁNOQ pages...) produced a Standard ML bindings
> for that. But reading this stuff I was simply scared to death!
> Using simultaneously 6 compilers, passing through "C" code, etc. - my
> impression was: OK, it should work. If somebody wants to write a game,
> a concrete simulation in ML, it might help him.
I tried to install that stuff a while ago; I spent 2
days cursing, and after a friendly but unhelpful email
exchange I gave up. I have no clue how to get it
working. It sounded really cool though: an ML
implementation that is on average 3 times as fast as
SML/NJ, a lightweight C interface, and support for
cross-compiling ML+SDL applications from Linux to Windows.
Maybe I should try one more time.
Jan
From korek@icm.edu.pl Sat May 19 18:06:42 2001
From: korek@icm.edu.pl (korek@icm.edu.pl)
Date: Sat, 19 May 2001 19:06:42 +0200 (CEST)
Subject: Is there version of Report merged with errata?
Message-ID:
It seems that the errata for the Haskell Report is an independent document
on www.haskell.org. My limited human perception forces me to ignore these
corrections completely. Has anybody got an updated (merged) version?
Greetings :-) (and thanks in advance)
Michal Gajda
korek@icm.edu.pl
From Sven.Panne@informatik.uni-muenchen.de Sat May 19 18:40:38 2001
From: Sven.Panne@informatik.uni-muenchen.de (Sven Panne)
Date: Sat, 19 May 2001 19:40:38 +0200
Subject: Is there version of Report merged with errata?
References:
Message-ID: <3B06B016.A5B09378@informatik.uni-muenchen.de>
korek@icm.edu.pl wrote:
> [...] Has anybody got an updated(merged) version?
http://research.microsoft.com/~simonpj/haskell98-revised/
Cheers,
Sven
From ketil@ii.uib.no Sun May 20 21:03:59 2001
From: ketil@ii.uib.no (Ketil Malde)
Date: 20 May 2001 22:03:59 +0200
Subject: Functional programming in Python
In-Reply-To: "Manuel M. T. Chakravarty"'s message of "Tue, 15 May 2001 15:46:23 +1000"
References: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
<20010515154623Y.chak@cse.unsw.edu.au>
Message-ID:
"Manuel M. T. Chakravarty" writes:
> You want to be able to write
> f 1 2 + g 3 4
> instead of
> (f 1 2) + (g 3 4)
I do? Personally, I find it a bit confusing, and I still often get it
wrong on the first attempt. The good thing is that the rule is simple
to remember. :-)
-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
From mpj@cse.ogi.edu Mon May 21 05:43:06 2001
From: mpj@cse.ogi.edu (Mark P Jones)
Date: Sun, 20 May 2001 21:43:06 -0700
Subject: HUGS error: Unresolved overloading
In-Reply-To: <000f01c0dec1$a01ca4a0$0100a8c0@CO3003288A>
Message-ID:
Hi David,
| Can anyone shed some light on the following error? Thanks in advance.
|
| isSorted :: Ord a => [a] -> Bool
| isSorted [] = True
| isSorted [x] = True
| isSorted (x1:x2:xs)
|     | x1 <= x2  = isSorted (x2:xs)
|     | otherwise = False
I'm branching away from your question, but hope that you might find some
additional comments useful ... the last equation in your definition can
actually be expressed more succinctly as:
isSorted (x1:x2:xs) = (x1 <= x2) && isSorted (x2:xs)
This means exactly the same thing, but, in my opinion at least, is much
clearer. In fact there's a really nice way to redo the whole definition
in one line using standard prelude functions and no explicit recursion:
isSorted xs = and (zipWith (<=) xs (tail xs))
In other words: "When is a list xs sorted? If each element in xs is
less than or equal to its successor in the list (i.e., the corresponding
element in tail xs)."
I know this definition may look obscure and overly terse if you're new
to either (a) the prelude functions used here or (b) the whole style of
programming. But once you get used to it, the shorter definition will
actually seem much clearer than the original, focusing on the important
parts and avoiding unnecessary clutter.
I don't see many emails on this list about "programming style", so this
is something of an experiment. If folks on the list find it interesting
and useful, perhaps we'll see more. But if everybody else thinks this
kind of thing is a waste of space, then I guess this may be the last
such posting!
All the best,
Mark
From laszlo@ropas.kaist.ac.kr Mon May 21 07:34:35 2001
From: laszlo@ropas.kaist.ac.kr (Laszlo Nemeth)
Date: Mon, 21 May 2001 15:34:35 +0900 (KST)
Subject: HUGS error: Unresolved overloading
In-Reply-To:
References:
Message-ID: <200105210634.PAA14039@ropas.kaist.ac.kr>
Hi Mark,
> isSorted xs = and (zipWith (<=) xs (tail xs))
> In other words: "When is a list xs sorted? If each element in xs is
> less than or equal to its successor in the list (i.e., the corresponding
> element in tail xs)."
That's right ... under cbn! At the same time David's version with
explicit recursion is fine both in Hugs and in a strict language.
I recently started using Caml for day to day work and I get bitten
because of the 'lazy mindset' at least once a week. I am not in
disagreement with you over the style, but explicit recursion in this
case avoids the problem.
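To make the difference concrete (my example, not Laszlo's): the empty list
is exactly where the one-liner leans on laziness, since
    isSorted []  =  and (zipWith (<=) [] (tail []))
Under call-by-need, zipWith pattern-matches its first argument, sees [],
and returns [] without ever demanding tail [], so isSorted [] is True.
Under call-by-value, tail [] is evaluated before zipWith is even entered
and raises an error, so a strict language needs an extra empty-list case
(and only the lazy version can answer False early on a long or infinite
unsorted list).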
Cheers,
Laszlo
PS. Why not go all the way
and . uncurry (zipWith (<=)) . id >< tail . dup
with appropriate definitions for dup and >< (prod)?
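For completeness, here is one way to make that precise (my guesses at the
definitions being assumed; >< is the product map on pairs, and I have
added the parentheses the fixities would otherwise demand):

    dup :: a -> (a, a)
    dup x = (x, x)

    (><) :: (a -> c) -> (b -> d) -> (a, b) -> (c, d)
    (f >< g) (x, y) = (f x, g y)

    isSorted :: Ord a => [a] -> Bool
    isSorted = and . uncurry (zipWith (<=)) . (id >< tail) . dup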
From Malcolm.Wallace@cs.york.ac.uk Mon May 21 13:55:27 2001
From: Malcolm.Wallace@cs.york.ac.uk (Malcolm Wallace)
Date: Mon, 21 May 2001 13:55:27 +0100
Subject: PP with HaXml 1.02
In-Reply-To: <9818339E731AD311AE5C00902715779C0371A76D@szrh00313.tszrh.csfb.com>
Message-ID:
> I'm proceeding with basic exploratory tests with HaXml (1.02, winhugs feb 2001)
>
> the pretty printer returns the closing > of the tag on the next line, like
>
> > >
> where i expect
>
>
>
>
>
> what am i doing wrong ?
You are not doing anything wrong. This is the intended behaviour.
It is very important that HaXml does not add whitespace that could be
interpreted as #PCDATA within an element, given that in many cases we
do not know whether the DTD might permit free text in this position.
Whitespace inside the tag is always ignored, so it is a safe way to
display indentation without changing the meaning of the document.
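For instance (my own illustration, not taken from the original message),
the output looks roughly like

    <chapter><title>Intro</title
      ><para>some text</para
      ></chapter>

rather than

    <chapter><title>Intro</title>
      <para>some text</para>
    </chapter>

because in the second form the line breaks between elements would be
character data inside <chapter>, whereas a line break placed before the
closing > of a tag is ignored by every XML parser.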
Regards,
Malcolm
From pk@cs.tut.fi Tue May 22 09:32:31 2001
From: pk@cs.tut.fi (Pertti Kellomäki)
Date: Tue, 22 May 2001 11:32:31 +0300
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org>
Message-ID: <3B0A241F.2F7E564A@cs.tut.fi>
> From: Ketil Malde
> "Manuel M. T. Chakravarty" writes:
> > You want to be able to write
>
> > f 1 2 + g 3 4
>
> > instead of
>
> > (f 1 2) + (g 3 4)
>
> I do? Personally, I find it a bit confusing, and I still often get it
> wrong on the first attempt.
Same here. A while back someone said something along the lines that people
come to Haskell because of the syntax. For me it is the other way around.
My background is in Scheme/Lisp, and I still find it irritating that I cannot
just say indent-sexp and the like in Emacs. It is the other properties of the
language that keep me using it. I also get irritated when I get
precedence wrong, so in fact I tend to write (f 1 2) + (g 2 3), which to
my eye conveys the intended structure much better and compiles at first try.
--
pertti
From olaf@cs.york.ac.uk Tue May 22 12:40:31 2001
From: olaf@cs.york.ac.uk (Olaf Chitil)
Date: Tue, 22 May 2001 12:40:31 +0100
Subject: 'lazy mindset' was: HUGS error: Unresolved overloading
References: <200105210634.PAA14039@ropas.kaist.ac.kr>
Message-ID: <3B0A502F.EF7771DA@cs.york.ac.uk>
Laszlo Nemeth wrote:
> > isSorted xs = and (zipWith (<=) xs (tail xs))
>
> > In other words: "When is a list xs sorted? If each element in xs is
> > less than or equal to its successor in the list (i.e., the corresponding
> > element in tail xs)."
>
> That's right ... under cbn! At the same time David's version with
> explicit recursion is fine both in Hugs and in a strict language.
>
> I recently started using Caml for day to day work and I get bitten
> because of the 'lazy mindset' at least once a week. I am not in
> disagreement with you over the style, but explicit recursion in this
> case avoids the problem.
I find this remark very interesting. It reminds me of the many people
who say that they want a call-by-value language that allows call-by-need
annotations, because you rarely need call-by-need. That depends very
much on what you mean by `need call-by-need'! Mark has given a nice
example of how a function can be defined concisely, taking advantage of the
call-by-need language. I very much disagree with Laszlo's comment that
you should use explicit recursion to make the definition valid for
call-by-value languages as well. You should take full advantage of the
expressive power of your call-by-need language. Otherwise, why not avoid
higher-order functions because they are not available in other
languages?
As Laszlo says, you need a 'lazy mindset' to use this style. It takes
time to develop this 'lazy mindset' (just as it takes time to lose it
again ;-). To help people understand and learn this 'lazy
mindset', I'd really like to see more examples such as Mark's. If there
are more examples, I could collect them on haskell.org.
> PS. Why not go all the way
>
> and . uncurry (zipWith (<=)) . id >< tail . dup
>
> with appropriate definitions for dup and >< (prod)?
It is longer, uses more functions and doesn't make the algorithm any
clearer. IMHO point-free programming and categorical combinators are not
the way to obtain readable programs.
Mark uses a few standard functions. His definition is more readable than
the recursive one, because it is shorter and because it turns implicit
control into explicit data structures.
Cheers,
Olaf
--
OLAF CHITIL,
Dept. of Computer Science, University of York, York YO10 5DD, UK.
URL: http://www.cs.york.ac.uk/~olaf/
Tel: +44 1904 434756; Fax: +44 1904 432767
From chak@cse.unsw.edu.au Tue May 22 14:54:48 2001
From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty)
Date: Tue, 22 May 2001 23:54:48 +1000
Subject: Functional programming in Python
In-Reply-To: <3B0A241F.2F7E564A@cs.tut.fi>
References: <20010521160104.A5990255C2@www.haskell.org>
<3B0A241F.2F7E564A@cs.tut.fi>
Message-ID: <20010522235448A.chak@cse.unsw.edu.au>
Pertti Kellomäki wrote,
> > From: Ketil Malde
> > "Manuel M. T. Chakravarty" writes:
> > > You want to be able to write
> >
> > > f 1 2 + g 3 4
> >
> > > instead of
> >
> > > (f 1 2) + (g 3 4)
> >
> > I do? Personally, I find it a bit confusing, and I still often get it
> > wrong on the first attempt.
>
> Same here. A while back someone said something along the lines that people
> come to Haskell because of the syntax. For me it is the other way around.
> My background is in Scheme/Lisp, and I still find it irritating that I
> cannot just say indent-sexp and the like in Emacs. It is the other
> properties of the language that keep me using it. I also get irritated
> when I get precedence wrong, so in fact I tend to write (f 1 2) + (g 2 3),
> which to my eye conveys the intended structure much better and compiles
> at first try.
In languages that don't use currying, you would write
    f (1, 2) + g (2, 3)
which also gives application precedence over infix
operators. So, I think, we can safely say that application
being stronger than infix operators is the standard
situation.
Nevertheless, the currying notation is a matter of habit.
It took me a while to get used to it, too (as did layout).
But now, I wouldn't want to miss them anymore. And as far
as layout is concerned, I think, the Python people have made
the same experience. For humans, it is quite natural to use
visual cues (like layout) to indicate semantics.
Cheers,
Manuel
From pk@cs.tut.fi Tue May 22 15:10:40 2001
From: pk@cs.tut.fi (Kellomaki Pertti)
Date: Tue, 22 May 2001 17:10:40 +0300
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org>
<3B0A241F.2F7E564A@cs.tut.fi> <20010522235448A.chak@cse.unsw.edu.au>
Message-ID: <3B0A7360.413DDEE9@cs.tut.fi>
"Manuel M. T. Chakravarty" wrote:
> In languages that don't use curring, you would write
> f (1, 2) + g (2, 3)
> which also gives application precedence over infix
> operators. So, I think, we can safely say that application
> being stronger than infix operators is the standard
> situation.
Agreed, though you must remember that where I come from there is no
precedence at all.
> And as far
> as layout is concerned, I think, the Python people have made
> the same experience. For humans, it is quite natural to use
> visual cues (like layout) to indicate semantics.
Two points. First, I have been with Haskell less than half a year, and
already I have run into a layout-related bug in a tool that produces
Haskell source. This does not raise my confidence in the approach very much.
Second, to a Lisp-head like myself something like
    (let ((a 0)
          (b 1))
      (+ a b))
does exactly what you say: it uses layout to indicate semantics. The
parentheses are there only to indicate semantics to the machine, and to
make it easy for tools to pretty print the expression in such a way that
the layout reflects the semantics as seen by the machine.
But all this is not very constructive, because Haskell is not going to
change into a fully parenthesized prefix syntax at my wish.
--
Pertti Kellom\"aki, Tampere Univ. of Technology, Software Systems Lab
From afie@cs.uu.nl Tue May 22 15:23:25 2001
From: afie@cs.uu.nl (Arjan van IJzendoorn)
Date: Tue, 22 May 2001 16:23:25 +0200
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org> <3B0A241F.2F7E564A@cs.tut.fi> <20010522235448A.chak@cse.unsw.edu.au> <3B0A7360.413DDEE9@cs.tut.fi>
Message-ID: <010501c0e2ca$c36d1130$ec50d383@sushi>
> > For humans, it is quite natural to use
> > visual cues (like layout) to indicate semantics.
I agree, but let us not try to do that with just two (already overloaded)
symbols.
> (let ((a 0)
>       (b 1))
>   (+ a b))
let { a = 0; b = 1; } in a + b
is valid Haskell and the way I use the language. Enough and more descriptive
visual cues, I say.
Using layout is an option, not a rule (although the thing is called layout
rule...)
> But all this is not very constructive, because Haskell is not going to
> change into a fully parenthesized prefix syntax at my wish.
Thank god :-)
Arjan
From paul.hudak@yale.edu Tue May 22 15:36:05 2001
From: paul.hudak@yale.edu (Paul Hudak)
Date: Tue, 22 May 2001 10:36:05 -0400
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org>
<3B0A241F.2F7E564A@cs.tut.fi> <20010522235448A.chak@cse.unsw.edu.au> <3B0A7360.413DDEE9@cs.tut.fi>
Message-ID: <3B0A7955.5E1DEBB4@yale.edu>
> Two points: I have been with Haskell less than half a year, and already
> I have run into a layout-related bug in a tool that produces Haskell
> source.
Why not have your tool generate layout-less code? Surely that would be
easier to program, and be less error prone.
> Second, to a Lisp-head like myself something like
> (let ((a 0)
> (b 1))
> (+ a b))
> does exactly what you say: it uses layout to indicate semantic.
Yes, but the layout is not ENFORCED. I programmed in Lisp for many
years before switching to Haskell, and a common error is something like
this:
> (let ((a 0)
>       (b 1)
>   (+ a b)))
In this case the error is relatively easy to spot, but in denser code it
can be very subtle. So in fact using layout in Lisp can imply a
semantics that is simply wrong.
-Paul
From pk@cs.tut.fi Tue May 22 15:51:58 2001
From: pk@cs.tut.fi (Kellomaki Pertti)
Date: Tue, 22 May 2001 17:51:58 +0300
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org>
<3B0A241F.2F7E564A@cs.tut.fi> <20010522235448A.chak@cse.unsw.edu.au> <3B0A7360.413DDEE9@cs.tut.fi> <3B0A7955.5E1DEBB4@yale.edu>
Message-ID: <3B0A7D0E.FC77CC99@cs.tut.fi>
I realize this is a topic where it would be very easy to start a flame
war, but hopefully we can avoid that.
Paul Hudak wrote:
> Why not have your tool generate layout-less code? Surely that would be
> easier to program, and be less error prone.
The tool in question is Happy, and the error materialized as an interaction
between the tool-generated parser code and the hand-written code in actions.
So no, this was not an option since the tool is not written by me, and given
my current capabilities in Haskell I could not even fix it. On the other hand
the bug is easy to work around, and it might even be fixed in newer versions
of Happy.
> Yes, but the layout is not ENFORCED. I programmed in Lisp for many
> years before switching to Haskell, and a common error is something like
> this:
>
> > (let ((a 0)
> > (b 1)
> > (+ a b)))
>
> In this case the error is relatively easy to spot, but in denser code it
> can be very subtle. So in fact using layout in Lisp can imply a
> semantics that is simply wrong.
Maybe I did not express my point clearly. What I was trying to say was that
because of the syntax, it is very easy for M-C-q in Emacs to convert that to
    (let ((a 0)
          (b 1)
          (+ a b)))
which brings the layout of the source code into agreement with how it is
perceived by the compiler/interpreter. So it is easy for me to enforce the
layout. This is not so much of an issue when you are writing the code in the
first place, but I find it a pain to have to adjust indentation when I move
bits of code around in an evolving program. If there is good support for
that, then I'll just shut up and start using it. After all, I have only been
using Haskell for a very short period of time.
--
pertti
From paul.hudak@yale.edu Tue May 22 15:58:35 2001
From: paul.hudak@yale.edu (Paul Hudak)
Date: Tue, 22 May 2001 10:58:35 -0400
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org>
<3B0A241F.2F7E564A@cs.tut.fi> <20010522235448A.chak@cse.unsw.edu.au> <3B0A7360.413DDEE9@cs.tut.fi> <3B0A7955.5E1DEBB4@yale.edu> <3B0A7D0E.FC77CC99@cs.tut.fi>
Message-ID: <3B0A7E9B.E31B9807@yale.edu>
> I realize this is a topic where it would be very easy to start a flame
> war, but hopefully we can avoid that.
No problem :-)
> Maybe I did not express my point clearly. What I was trying to say was
> that
> because of the syntax, it is very easy for M-C-q in Emacs to convert
> that to ...
Ok, I understand now. So clearly we just need better editing tools for
Haskell, which I guess is part of your point.
By the way, there are many Haskell programmers who prefer to write their
programs like this:
let { a = x
    ; b = y
    ; c = z
    }
in ...
which arguably has its merits.
-Paul
From brk@jenkon.com Tue May 22 18:00:28 2001
From: brk@jenkon.com (brk@jenkon.com)
Date: Tue, 22 May 2001 10:00:28 -0700
Subject: Functional programming in Python
Message-ID: <61AC3AD3E884D411836F0050BA8FE9F335522A@franklin.jenkon.com>
> -----Original Message-----
> From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> Sent: Tuesday, May 22, 2001 6:55 AM
> To: pk@cs.tut.fi
> Cc: haskell-cafe@haskell.org
> Subject: Re: Functional programming in Python
>
> Pertti Kellomäki wrote,
>
> > > From: Ketil Malde
> > > "Manuel M. T. Chakravarty" writes:
> > > > You want to be able to write
> > >
> > > > f 1 2 + g 3 4
> > >
> > > > instead of
> > >
> > > > (f 1 2) + (g 3 4)
> > >
> > > I do? Personally, I find it a bit confusing, and I still often get it
> > > wrong on the first attempt.
> >
> > Same here. A while back someone said something along the lines that
> > people come to Haskell because of the syntax. For me it is the other
> > way around. My background is in Scheme/Lisp, and I still find it
> > irritating that I cannot just say indent-sexp and the like in Emacs.
> > It is the other properties of the language that keep me using it. I
> > also get irritated when I get precedence wrong, so in fact I tend to
> > write (f 1 2) + (g 2 3), which to my eye conveys the intended structure
> > much better and compiles at first try.
>
> In languages that don't use currying, you would write
>
>     f (1, 2) + g (2, 3)
>
> which also gives application precedence over infix
> operators. So, I think, we can safely say that application
> being stronger than infix operators is the standard
> situation.
[Bryn Keller]
There's another piece to this question that we're overlooking, I
think. It's not just a difference (or lack thereof) in precedence, it's the
fact that parentheses indicate application in Python and many other
languages, and a function name without parentheses after it is a reference
to the function, not an application of it. This has nothing to do with
currying that I can see - you can have curried functions in Python, and they
still look the same. The main advantage I see for the Haskell style is
(sometimes) fewer keypresses for parentheses, but I still find it surprising
at times. Unfortunately in many cases you need to apply nearly as many
parens for a Haskell expression as you would for a Python one, but they're
in different places. It's not:
    foo( bar( baz( x ) ) )
it's:
    (foo ( bar (baz x) ) )
I'm not sure why folks thought this was an improvement. I suppose it
bears more resemblance to lambda calculus?
> Nevertheless, the currying notation is a matter of habit.
> It took me a while to get used to it, too (as did layout).
> But now, I wouldn't want to miss them anymore. And as far
> as layout is concerned, I think, the Python people have made
> the same experience. For humans, it is quite natural to use
> visual cues (like layout) to indicate semantics.
[Bryn Keller]
Absolutely. Once you get used to layout (Haskell style or Python
style), everything else looks like it was designed specifically to irritate
you. On the other hand, it's nice to have a brace-delimited style since that
makes autogenerating code a lot easier.
Bryn
> Cheers,
> Manuel
From heringto@cs.unc.edu Tue May 22 18:16:55 2001
From: heringto@cs.unc.edu (Dean Herington)
Date: Tue, 22 May 2001 13:16:55 -0400
Subject: Functional programming in Python
References: <61AC3AD3E884D411836F0050BA8FE9F335522A@franklin.jenkon.com>
Message-ID: <3B0A9F06.F2C28F58@cs.unc.edu>
brk@jenkon.com wrote:
> There's another piece to this question that we're overlooking, I
> think. It's not just a difference (or lack thereof) in precedence, it's the
> fact that parentheses indicate application in Python and many other
> languages, and a function name without parentheses after it is a reference
> to the function, not an application of it. This has nothing to do with
> currying that I can see - you can have curried functions in Python, and they
> still look the same. The main advantage I see for the Haskell style is
> (sometimes) fewer keypresses for parentheses, but I still find it surprising
> at times. Unfortunately in many cases you need to apply nearly as many
> parens for a Haskell expression as you would for a Python one, but they're
> in different places. It's not:
>
> foo( bar( baz( x ) ) )
> it's:
> (foo ( bar (baz x) ) )
>
> I'm not sure why folks thought this was an improvement. I suppose it
> bears more resemblance to lambda calculus?
In Haskell, one doesn't need to distinguish "a reference to the function" from
"an application of it". As a result, parentheses need to serve only a single
function, that of grouping. Parentheses surround an entire function
application, just as they surround an entire operation application:
foo (fum 1 2) (3 + 4)
I find this very consistent, simple, and elegant.
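For instance (my example):

    map length ["abc", "de"]

Here the bare name length simply denotes the function, passed to map like
any other value, and writing length "abc" with the very same juxtaposition
syntax applies it; no separate reference/application notation is needed.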
Dean
From qrczak@knm.org.pl Tue May 22 21:53:57 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 22 May 2001 20:53:57 GMT
Subject: 'lazy mindset' was: HUGS error: Unresolved overloading
References: <200105210634.PAA14039@ropas.kaist.ac.kr> <3B0A502F.EF7771DA@cs.york.ac.uk>
Message-ID:
I have a case where I don't know how to apply laziness well.
Consider mutable containers (e.g. STArray or a database on disk).
How to iterate over them? I see the following possibilities:
* Give up laziness, provide only a conversion to the list of items.
Since the conversion is monadic, it must construct the whole
list before it returns. Such a conversion must be present anyway,
especially as this is the only way to ensure that we snapshot
consistent contents, and it can be argued that we usually don't
really need anything much more sophisticated, and that in some
cases we might get values by keys/indices generated in some lazy way.
* Use unsafeInterleaveIO and do a similar thing to hGetContents.
Easy to use but impure and dangerous if evaluation is not forced
early enough.
* Invent a lazy monadic sequence:
newtype Lazy m a = Lazy {getBegin :: m (Maybe (a, Lazy m a))}
(This can't be just
type Lazy m a = m (Maybe (a, Lazy m a))
because it would be a recursive type.)
It's generic: I can iterate over a collection and stop at any point.
But using it is not a pleasure. Some generic operations make sense
for this without changing the interface (filter, takeWhile), some
don't (filterM - it can't work for arbitrary monad, only for the
one used in the particular lazy sequence), and it's needed to have
separate versions of some operations, e.g.
filterLazy :: Monad m => (a -> m Bool) -> Lazy m a -> Lazy m a
which fits neither filter nor filterM. It artificially splits my
class design or requires methods implemented as error "sorry, not
applicable".
* Introduce a stateful iterator, like C++ or Java. This is ugly but
would work too. Probably just the simplest version, i.e.
getIter :: SomeMutableContainer m a -> m (m (Maybe a))
without going backwards etc.
Suggestions? I think I will do the last choice, after realizing that
Lazy m a is not that nice, but perhaps it can be done better...
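For what it's worth, here is a minimal sketch of that last option (entirely
mine, with a plain list standing in for the mutable container and ST as
the monad; module names as in current GHC):

    import Control.Monad.ST
    import Data.STRef

    -- The iterator is itself a monadic action: each run yields Just the
    -- next element, or Nothing once the container is exhausted.
    getIter :: [a] -> ST s (ST s (Maybe a))
    getIter xs = do
        ref <- newSTRef xs
        return $ do
            ys <- readSTRef ref
            case ys of
                []     -> return Nothing
                z : zs -> do
                    writeSTRef ref zs
                    return (Just z)

    -- For example, draining the iterator back into a list:
    drain :: ST s (Maybe a) -> ST s [a]
    drain next = do
        mx <- next
        case mx of
            Nothing -> return []
            Just x  -> fmap (x :) (drain next)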
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From Tom.Pledger@peace.com Tue May 22 23:23:10 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Wed, 23 May 2001 10:23:10 +1200
Subject: Recursive types?
In-Reply-To: <6605DE4621E5934593474F620A14D700190C19@red-msg-07.redmond.corp.microsoft.com>
References: <6605DE4621E5934593474F620A14D700190C19@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <15114.59086.164856.212086@waytogo.peace.co.nz>
(Bailing out to haskell-cafe because this stuff is probably old hat)
David Bakin writes:
| Thank you.
|
| Now, on seeing your Tree example I tried to use it by defining
| height using structural recursion. But I couldn't find a valid
| syntax to pattern match - or otherwise define the function - on the
| type CT. Is there one?
No. You can generalise over some structures
ctHeight :: (CT c k e -> Tree c k e) -> CT c k e -> Int
ctHeight extract ct
= case extract ct of
Empty -> 0
Node l _ r -> let hl = ctHeight extract l
hr = ctHeight extract r
in 1 + max hl hr
plainHeight = ctHeight (\(Plain t) -> t)
labelledHeight = ctHeight snd
where the role of `extract' is a simplification of that of `phi' in
section 5.1 of the paper you mentioned (MPJ's Functional Programming
with Overloading and Higher-Order Polymorphism). I'm rapidly getting
out of my depth in bananas and lenses, so won't try to be more
specific about the similarity.
Anyway, the type of ctHeight is not appropriate for all possible
structures. For example, to follow an STRef you must visit an ST:
import ST
stHeight :: CT STRef s e -> ST s Int
stHeight ct
= do t <- readSTRef ct
case t of
Empty -> return 0
Node l _ r -> do hl <- stHeight l
hr <- stHeight r
return (1 + max hl hr)
- Tom
:
| -----Original Message-----
| From: Tom Pledger [mailto:Tom.Pledger@peace.com]
:
| One advantage of such higher-order types is reusability. For example,
| this
|
| type CT c k e
| = c k (Tree c k e) -- Contained Tree, container, key, element
| data Tree c k e
| = Empty
| | Node (CT c k e) e (CT c k e)
|
| can be used with no frills
|
| newtype Plain k t = Plain t
| ... :: Tree Plain () e
|
| or with every subtree labelled
|
| ... :: Tree (,) String e
|
| or with every subtree inside the ST monad
|
| import ST
| ... :: Tree STRef s e
From tweed@compsci.bristol.ac.uk Wed May 23 10:44:04 2001
From: tweed@compsci.bristol.ac.uk (D. Tweed)
Date: Wed, 23 May 2001 10:44:04 +0100 (BST)
Subject: Templates in FPL?
In-Reply-To:
Message-ID:
On 22 May 2001, Carl R. Witty wrote:
> "D. Tweed" writes:
>
> > In my experience the C++ idiom `you only pay for what you use' (==>
> > templates are essentially type-checked macros) and the fact most compilers
> > are evolved from C compilers makes working with templates a real pain in
> > practice.
>
> I'm not sure what you mean by type-checked here. Templates are not
> type-checked at definition time, but are type-checked when they are
> used; the same is true of ordinary macros.
I was thinking in terms of (to take a really simple example)
template <class T>
void
initialiseArray(T** arr, const T& elt, int bnd)
{
    for(int i = 0; i < bnd; ++i)
        ...
}
If I try and use initialiseArray(..., ...), with
the template function I get the error when passing in the parameters; with
a macro the type error would appear at the indicated line, and indeed if
by pure chance bar just happened to have a member function called
realValue then I wouldn't get one at all. In my view it's reasonable
to say that template functions are type-checked in essentially the same
way as general functions, giving type errors in terms of the source
code that you're using, whereas there's no way to ignore the fact that
macros dump a lot of code at the calling point in your program (mapping
certain identifiers to local identifiers), which is then type-checked in
the context of the calling point. The above code is a really contrived
example, but I do genuinely find it useful that template type-error
messages are essentially normal function type-error messages.
[As an aside to Marcin, one of my vague plans (once I've been viva'd,
etc) is to have a go at writing some Haskell code that attempts to
simplify g++ template type errors, e.g., taking in a nasty screen-long
error message and saying
`This looks like a list ---- list* mistake'
Like everything I plan to do it may not materialise though.]
___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law: however many computers
email: tweed@cs.bris.ac.uk | you have, half your time is spent
work tel: (0117) 954-5250 | waiting for compilations to finish.
From fjh@cs.mu.oz.au Wed May 23 17:41:49 2001
From: fjh@cs.mu.oz.au (Fergus Henderson)
Date: Thu, 24 May 2001 02:41:49 +1000
Subject: Templates in FPL?
In-Reply-To:
References:
Message-ID: <20010524024149.B14295@hg.cs.mu.oz.au>
On 23-May-2001, D. Tweed wrote:
> On 22 May 2001, Carl R. Witty wrote:
>
> > "D. Tweed" writes:
> >
> > > In my experience the C++ idiom `you only pay for what you use' (==>
> > > templates are essentially type-checked macros) and the fact most compilers
> > > are evolved from C compilers makes working with templates a real pain in
> > > practice.
> >
> > I'm not sure what you mean by type-checked here. Templates are not
> > type-checked at definition time, but are type-checked when they are
> > used; the same is true of ordinary macros.
>
> I was thinking in terms of (to take a really simple example)
>
> template <class T>
> void
> initialiseArray(T** arr, const T& elt, int bnd)
...
> If I try and use initialiseArray(..., ...), with
> the template function I get the error when passing in the parameters;
In other words, *calls to* template functions are type-checked at compile time.
However, *definitions of* template functions are only type-checked when they
are instantiated.
--
Fergus Henderson | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: | -- the last words of T. S. Garp.
From tweed@compsci.bristol.ac.uk Wed May 23 20:31:34 2001
From: tweed@compsci.bristol.ac.uk (D. Tweed)
Date: Wed, 23 May 2001 20:31:34 +0100 (BST)
Subject: Templates in FPL?
In-Reply-To: <20010524024149.B14295@hg.cs.mu.oz.au>
Message-ID:
On Thu, 24 May 2001, Fergus Henderson wrote:
> > > I'm not sure what you mean by type-checked here. Templates are not
> > > type-checked at definition time, but are type-checked when they are
> > > used; the same is true of ordinary macros.
> >
> > I was thinking in terms of (to take a really simple example)
> >
> > template <class T>
> > void
> > initialiseArray(T** arr, const T& elt, int bnd)
> ...
> > If I try and use initialiseArray(..., ...), with
> > the template function I get the error when passing in the parameters;
>
> In other words, *calls to* template functions are type-checked at compile time.
> However, *definitions of* template functions are only type-checked when they
> are instantiated.
Umm... the point I was trying to make (in an inept way) was that the
type-check error messages that you get are in the context of the original
function that you physically wrote; whereas in the macro version the
error is instead in terms of the munged source code, which can make tracing
it back to the `original erroneous source line' difficult.
___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law: however many computers
email: tweed@cs.bris.ac.uk | you have, half your time is spent
work tel: (0117) 954-5250 | waiting for compilations to finish.
From p.g.hancock@swansea.ac.uk Thu May 24 12:08:15 2001
From: p.g.hancock@swansea.ac.uk (Peter Hancock)
Date: Thu, 24 May 2001 12:08:15 +0100 (BST)
Subject: Functional programming in Python
Message-ID: <15116.60319.718276.135092@cspcas.swan.ac.uk>
Hi, you said
> Unfortunately in many cases you need to apply nearly as many
> parens for a Haskell expression as you would for a Python one, but
> they're in different places. It's not:
>
> foo( bar( baz( x ) ) )
> it's:
> (foo ( bar (baz x) ) )
Clearly the outer parentheses are unnecessary in the last expression.
One undeniable advantage of (f a) is that it saves parentheses.
My feeling is that the f(a) (mathematical) notation works well when
typeset or handwritten, but the (f a) (combinatory logic) notation
looks better in non-proportional fonts.
In a way the f(a) notation "represents things better": the f is at a
higher parenthesis level than the a.
Peter Hancock
From peterd@availant.com Thu May 24 14:07:44 2001
From: peterd@availant.com (Peter Douglass)
Date: Thu, 24 May 2001 09:07:44 -0400
Subject: Functional programming in Python
Message-ID: <8BDAB3CD0E67D411B02400D0B79EA49ADF593D@smail01.clam.com>
Peter Hancock wrote:
> > foo( bar( baz( x ) ) )
> > it's:
> > (foo ( bar (baz x) ) )
>
> Clearly the outer parentheses are unnecessary in the last expression.
> One undeniable advantage of (f a) is it saves parentheses.
Yes and no. In
( ( ( foo bar) baz) x )
the parens can be omitted to leave
foo bar baz x
but in ( foo ( bar (baz x) ) )
You would want the following I think.
foo . bar . baz x
which does have the parens omitted, but requires the composition operator.
--PeterD
From Tom.Pledger@peace.com Thu May 24 22:06:18 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Fri, 25 May 2001 09:06:18 +1200
Subject: Functional programming in Python
In-Reply-To: <8BDAB3CD0E67D411B02400D0B79EA49ADF593D@smail01.clam.com>
References: <8BDAB3CD0E67D411B02400D0B79EA49ADF593D@smail01.clam.com>
Message-ID: <15117.30666.600561.616048@waytogo.peace.co.nz>
Peter Douglass writes:
:
| but in ( foo ( bar (baz x) ) )
|
| You would want the following I think.
|
| foo . bar . baz x
|
| which does have the parens omitted, but requires the composition
| operator.
Almost. To preserve the meaning, the composition syntax would need to
be
(foo . bar . baz) x
or
foo . bar . baz $ x
or something along those lines. I favour the one with parens around
the dotty part, and tend to use $ only when a closing paren is
threatening to disappear over the horizon.
do ...
return $ case ... of
... -- many lines
Regards,
Tom
From alex@shop.com Fri May 25 18:54:35 2001
From: alex@shop.com (S. Alexander Jacobson)
Date: Fri, 25 May 2001 13:54:35 -0400 (Eastern Daylight Time)
Subject: Functional programming in Python
In-Reply-To: <15117.30666.600561.616048@waytogo.peace.co.nz>
Message-ID:
Does anyone know why the haskell designers did not make the syntax
right associative? It would clean up a lot of stuff.
   Haskell                               Non-Haskell
   Left Associative                      Right Associative
   foo (bar (baz (x)))                   foo bar baz x
   foo $ bar $ baz x                     foo bar baz x
   add (square x) (square y)             add square x square y
   add (square x) y                      add square x y
   ------------From Prelude----------------------
   map f x                               (map f) x
   f x (n - 1) x                         f x n - 1 x
   f x (foldr1 f xs)                     f x foldr1 f xs
   showChar '[' . shows x . showl xs     showChar '[' shows x showl xs
You just need to read from right to left accumulating a stack of
arguments. When you hit a function that can consume some arguments, it
does so. There is an error if you end up with more than one value on
the argument stack.
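For concreteness, here is a toy model of that reading (entirely my own;
explicit arities stand in for the type information that the real proposal
would need):

    -- A value is either an atom or a function that knows how many
    -- arguments it consumes.
    data V = Atom Int | Fun Int ([V] -> V)

    -- Push a token onto the stack; a function on top that can consume
    -- enough arguments does so immediately.
    push :: V -> [V] -> [V]
    push v stack = reduce (v : stack)
      where
        reduce (Fun n f : args) | length args >= n
            = reduce (f (take n args) : drop n args)
        reduce vs = vs

    -- Read the expression right to left; more than one leftover value
    -- on the stack is an error.
    parseRL :: [V] -> Either String V
    parseRL tokens = case foldr push [] tokens of
        [v] -> Right v
        _   -> Left "leftover arguments"

    -- e.g. with  add    = Fun 2 (\[Atom a, Atom b] -> Atom (a + b))
    --            square = Fun 1 (\[Atom a] -> Atom (a * a))
    -- parseRL [add, square, Atom 3, square, Atom 4] gives Right (Atom 25),
    -- matching  add (square 3) (square 4).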
-Alex-
On Fri, 25 May 2001, Tom Pledger wrote:
> Peter Douglass writes:
> :
> | but in ( foo ( bar (baz x) ) )
> |
> | You would want the following I think.
> |
> | foo . bar . baz x
> |
> | which does have the parens omitted, but requires the composition
> | operator.
>
> Almost. To preserve the meaning, the composition syntax would need to
> be
>
> (foo . bar . baz) x
>
> or
>
> foo . bar . baz $ x
>
> or something along those lines. I favour the one with parens around
> the dotty part, and tend to use $ only when a closing paren is
> threatening to disappear over the horizon.
>
> do ...
> return $ case ... of
> ... -- many lines
>
> Regards,
> Tom
>
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___________________________________________________________________
S. Alexander Jacobson Shop.Com
1-646-638-2300 voice The Easiest Way To Shop (sm)
From zhanyong.wan@yale.edu Fri May 25 19:00:54 2001
From: zhanyong.wan@yale.edu (Zhanyong Wan)
Date: Fri, 25 May 2001 14:00:54 -0400
Subject: Functional programming in Python
References:
Message-ID: <3B0E9DD6.F16308C0@yale.edu>
"S. Alexander Jacobson" wrote:
>
> Does anyone know why the haskell designers did not make the syntax
> right associative? It would clean up a lot of stuff.
>
>    Haskell                               Non-Haskell
>    Left Associative                      Right Associative
>    foo (bar (baz (x)))                   foo bar baz x
>    foo $ bar $ baz x                     foo bar baz x
>    add (square x) (square y)             add square x square y
>    add (square x) y                      add square x y
>    ------------From Prelude----------------------
>    map f x                               (map f) x
>    f x (n - 1) x                         f x n - 1 x
>    f x (foldr1 f xs)                     f x foldr1 f xs
>    showChar '[' . shows x . showl xs     showChar '[' shows x showl xs
>
> You just need to read from right to left accumulating a stack of
> arguments. When you hit a function that can consume some arguments, it
> does so. There is an error if you end up with more than one value on
> the argument stack.
Note that in your proposal,
add square x y
is parsed as
add (square x) y
instead of
add (square (x y)),
so it's not right associative either.
As you explained, the parse of an expression depends on the types of the
sub-expressions, which imo is BAD. Just consider type inference...
-- Zhanyong
From alex@shop.com Fri May 25 22:25:42 2001
From: alex@shop.com (S. Alexander Jacobson)
Date: Fri, 25 May 2001 17:25:42 -0400 (Eastern Daylight Time)
Subject: Functional programming in Python
In-Reply-To: <3B0E9DD6.F16308C0@yale.edu>
Message-ID:
On Fri, 25 May 2001, Zhanyong Wan wrote:
> As you explained, the parse of an expression depends the types of the
> sub-expressions, which imo is BAD. Just consider type inference...
Ok, your complaint is that f a b c = a b c could have type
(a->b->c)->a->b->c or type (b->c)->(a->b)->a->c depending on the arguments
passed, e.g. (f head (map (+2)) [3]) has a different type from (f add 2 3).
Admittedly, this is different from how haskell type checks now. I guess
the question is whether it is impossible to type check or whether it just
requires modification to the type checking algorithm. Does anyone know?
-Alex-
___________________________________________________________________
S. Alexander Jacobson Shop.Com
1-646-638-2300 voice The Easiest Way To Shop (sm)
From jcab@roningames.com Sat May 26 03:52:03 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Fri, 25 May 2001 19:52:03 -0700
Subject: Functional programming in Python
Message-ID: <4.3.2.7.2.20010525195107.036d67c8@207.33.235.243>
At 05:25 PM 5/25/2001 -0400, S. Alexander Jacobson wrote:
>Admittedly, this is different from how haskell type checks now. I guess
>the question is whether it is impossible to type check or whether it just
>requires modification to the type checking algorithm. Does anyone know?
I don't think so... The only ambiguity that I can think of is with
passing functions as arguments to other functions, and you showed that it
can be resolved by currying:
    map f x
would have to be force-curried using parentheses:
    (map f) x
because otherwise it would mean:
    map (f x)
which is both very wrongly typed and NOT the intention.
I like your parsing scheme. I still DO like more explicit languages
better, though (i.e. map(f, x) style, like C & Co.). Currying is cool, but
it can be kept at a conceptual level, not affecting syntax.
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From sqrtofone@yahoo.com Mon May 28 04:46:37 2001
From: sqrtofone@yahoo.com (Jay Cox)
Date: Sun, 27 May 2001 22:46:37 -0500
Subject: Funny type.
References: <20010506160103.1A2EE255CE@www.haskell.org>
Message-ID: <3B11CA1D.B2729F88@yahoo.com>
One day I was playing around with types and I came across this type:
>data S m a = Nil | Cons a (m (S m a))
The idea being one could use generic functions with whatever monad m (of
course m doesn't need to be a monad, but my original idea was to be able
to make mutable lists with some sort of monad m.)
Anyway in attempting to define a generic show instance for the above
datatype I finally came upon:
>instance (Show a, Show (m (S m a))) => Show (S m a) where
> show Nil = "Nil"
> show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
which boggles my mind. But hugs +98 and ghci
-fallow-undecidable-instances both allow it to compile but when i try
>show s
on
>s = Cons 3 [Cons 4 [], Cons 5 [Cons 2 [],Cons 3 []]]
(btw yes, we are "nesting" arbitrary lists here! however, structurally
it really isn't much different from any tree datatype)
we get
ERROR -
*** The type checker has reached the cutoff limit while trying to
*** determine whether:
*** Show (S [] Integer)
*** can be deduced from:
*** ()
*** This may indicate that the problem is undecidable. However,
*** you may still try to increase the cutoff limit using the -c
*** option and then try again. (The current setting is -c998)
funny thing is apparently if you set -c to an odd number (on hugs)
it gives
*** The type checker has reached the cutoff limit while trying to
*** determine whether:
*** Show Integer
*** can be deduced from:
*** ()
why would it try to deduce Show Integer?
Anyway, is my instance declaration still a bit mucked up?
Also, could there be a way to give a definition of show for S [] a?
Heres my sample definition just in case anybody is curious.
>myshow Nil        = "Nil"
>myshow (Cons x y) = "Cons " ++ show x ++
>                    " [" ++ blowup y ++ "]"
>  where blowup (x:y:ls) = myshow x ++ "," ++ blowup (y:ls)
>        blowup (x:[])   = myshow x
>        blowup []       = ""
Thanks,
Jay Cox
yet another non-academic person on this list.
From Tom.Pledger@peace.com Mon May 28 05:53:43 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Mon, 28 May 2001 16:53:43 +1200
Subject: Functional programming in Python
In-Reply-To:
References: <3B0E9DD6.F16308C0@yale.edu>
Message-ID: <15121.55767.374165.793888@waytogo.peace.co.nz>
S. Alexander Jacobson writes:
| On Fri, 25 May 2001, Zhanyong Wan wrote:
| > As you explained, the parse of an expression depends the types of the
| > sub-expressions, which imo is BAD. Just consider type inference...
Also, we can no longer take a divide-and-conquer approach to reading
code, since the syntax may depend on the types of imports.
| Ok, your complaint is that f a b c=a b c could have type
| (a->b->c)->a->b->c or type (b->c)->(a->b)->a->c depending on the arguments
| passed e.g. (f head (map +2) [3]) has different type from (f add 2 3).
|
| Admittedly, this is different from how haskell type checks now. I guess
| the question is whether it is impossible to type check or whether it just
| requires modification to the type checking algorithm. Does anyone know?
Here's a troublesome example: both parses are well-typed, so the types
alone can't choose between them.
module M(trouble) where
f, g :: (a -> b) -> a -> b
f = undefined
g = undefined
trouble = (.) f g
-- ((.) f) g :: (a -> b) -> a -> b
-- (.) (f g) :: (a -> b -> c) -> a -> b -> c
Regards,
Tom
From dpt@math.harvard.edu Mon May 28 06:01:44 2001
From: dpt@math.harvard.edu (Dylan Thurston)
Date: Mon, 28 May 2001 01:01:44 -0400
Subject: Funny type.
In-Reply-To: <3B11CA1D.B2729F88@yahoo.com>
References: <20010506160103.1A2EE255CE@www.haskell.org> <3B11CA1D.B2729F88@yahoo.com>
Message-ID: <20010528010144.A30648@math.harvard.edu>
On Sun, May 27, 2001 at 10:46:37PM -0500, Jay Cox wrote:
> >data S m a = Nil | Cons a (m (S m a))
>...
> >instance (Show a, Show (m (S m a))) => Show (S m a) where
> > show Nil = "Nil"
> > show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
...
> >show s
> >s = Cons 3 [Cons 4 [], Cons 5 [Cons 2 [],Cons 3 []]]
> Anyway, is my instance declaration still a bit mucked up?
Hmm. To try to deduce Show (S [] Integer), the type checker reduces
it by your instance declaration to Show [S [] Integer], which reduces
to Show (S [] Integer), which reduces to...
ghci or hugs could, in theory, be slightly smarter and handle this case.
> Also, could there be a way to give a definition of show for S [] a?
Yes. You could drop the generality:
instance (Show a) => Show (S [] a) where
show Nil = "Nil"
show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
Really, the context you want is something like
instance (Show a, forall b. Show b => Show (m b)) => Show (S m b) ...
if that were legal.
--Dylan
From Tom.Pledger@peace.com Mon May 28 06:15:12 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Mon, 28 May 2001 17:15:12 +1200
Subject: Funny type.
In-Reply-To: <3B11CA1D.B2729F88@yahoo.com>
References: <20010506160103.1A2EE255CE@www.haskell.org>
<3B11CA1D.B2729F88@yahoo.com>
Message-ID: <15121.57056.23979.618943@waytogo.peace.co.nz>
Jay Cox writes:
| One day I was playing around with types and I came across this type:
|
| >data S m a = Nil | Cons a (m (S m a))
:
| >instance (Show a, Show (m (S m a))) => Show (S m a) where
| > show Nil = "Nil"
| > show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
|
| which boggles my mind. But hugs +98 and ghci
| -fallow-undecidable-instances both allow it to compile but when i try
|
| >show s
|
| on
|
| >s = Cons 3 [Cons 4 [], Cons 5 [Cons 2 [],Cons 3 []]]
|
| (btw yes we are "nesting" an arbitrary lists here! however structurally
| it really isnt much different from any tree datatype)
|
| we get
|
|
| ERROR -
| *** The type checker has reached the cutoff limit while trying to
| *** determine whether:
| *** Show (S [] Integer)
| *** can be deduced from:
| *** ()
| *** This may indicate that the problem is undecidable. However,
| *** you may still try to increase the cutoff limit using the -c
| *** option and then try again. (The current setting is -c998)
|
|
| funny thing is apparently if you set -c to an odd number (on hugs)
| it gives
|
|
| *** The type checker has reached the cutoff limit while trying to
| *** determine whether:
| *** Show Integer
| *** can be deduced from:
| *** ()
|
| why would it try to deduce Show integer?
It's the first subgoal of Show (S [] Integer). If the cutoff were
greater by 1, presumably it's achieved, and then that last memory cell
is reused for the second subgoal Show [S [] Integer], which in turn
has a subgoal Show (S [] Integer), which overflows... as Dylan's just
pointed out, so I'll stop now.
- Tom
From Malcolm.Wallace@cs.york.ac.uk Mon May 28 10:23:58 2001
From: Malcolm.Wallace@cs.york.ac.uk (Malcolm Wallace)
Date: Mon, 28 May 2001 10:23:58 +0100
Subject: Functional programming in Python
In-Reply-To:
Message-ID: <2F8AAC4ZEjsergoA@cs.york.ac.uk>
It seems that right-associativity is so intuitive that even the person
proposing it doesn't get it right. :-) Partial applications are a
particular problem:
>    Haskell                               Non-Haskell
>    Left Associative                      Right Associative
>    ------------From Prelude----------------------
>    f x (foldr1 f xs)                     f x foldr1 f xs
Wouldn't the rhs actually mean f x (foldr1 (f xs)) in current notation?
>    showChar '[' . shows x . showl xs     showChar '[' shows x showl xs
Wouldn't the rhs actually mean showChar '[' (shows x (showl xs))
in current notation? This is quite different to the lhs composition.
For these two examples, the correct right-associative expressions,
as far as I can tell, should be:
     f x (foldr1 f xs)                    f x (foldr1 f) xs
     showChar '[' . shows x . showl xs    showChar '[' . shows x . showl xs
Regards,
Malcolm
From simonpj@microsoft.com Mon May 28 10:02:29 2001
From: simonpj@microsoft.com (Simon Peyton-Jones)
Date: Mon, 28 May 2001 02:02:29 -0700
Subject: Funny type.
Message-ID: <37DA476A2BC9F64C95379BF66BA26902D72FB0@red-msg-09.redmond.corp.microsoft.com>
You need a language extension.
Check out Section 7 of "Derivable type classes"
http://research.microsoft.com/~simonpj/Papers/derive.htm
Alas, I have not implemented the idea yet.
(Partly because no one ever reports it as a problem; you
are the first!)
Simon
| One day I was playing around with types and I came across this type:
|
| >data S m a = Nil | Cons a (m (S m a))
|
| The idea being one could use generic functions with whatever
| monad m (of course m doesn't need to be a monad but my
| original idea was to be able to make mutable lists with some
| sort of monad m.)
|
| Anyway in attempting to define a generic show instance for
| the above datatype I finally came upon:
|
| >instance (Show a, Show (m (S m a))) => Show (S m a) where
| > show Nil = "Nil"
| > show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
|
| which boggles my mind. But hugs +98 and ghci
| -fallow-undecidable-instances both allow it to compile but when i try
From mark@chaos.x-philes.com Mon May 28 16:11:26 2001
From: mark@chaos.x-philes.com (Mark Carroll)
Date: Mon, 28 May 2001 11:11:26 -0400 (EDT)
Subject: Multithreaded stateful software
Message-ID:
Often I've found that quite how wonderful a programming language is isn't
clear until you've used it for a non-trivial project. So, I'm still
battling on with Haskell.
One of the projects I have coming up is a multi-threaded server that
manages many clients in performing a distributed computation using a
number of computers. So, we care about state, and control flow has some
concurrent threads and is partially event-driven.
Some possibilities come to my mind:
(a) This really isn't what Haskell was designed for, and if I try to write
this in Haskell I'll never want to touch it again.
(b) This project is quite feasible in Haskell but when it's done I'll feel
I should have just used Java or something.
(c) Haskell's monads, concurrency stuff and TCP/IP libraries are really
quite powerful and useful, and I'll be happy I picked Haskell for the
task.
Does anyone have any thoughts? (-: I have a couple of symbolic computation
tasks too that use complex data structures, which I'm sure that Haskell
would be great for, but I want it to be more generally useful than that
because, although it's nice to always use the best tool for the job, it's
also nice not to be using too many languages in-house.
-- Mark
From ken@digitas.harvard.edu Mon May 28 16:41:13 2001
From: ken@digitas.harvard.edu (Ken Shan)
Date: Mon, 28 May 2001 11:41:13 -0400
Subject: Funny type.
In-Reply-To: <3B11CA1D.B2729F88@yahoo.com>; from sqrtofone@yahoo.com on Sun, May 27, 2001 at 10:46:37PM -0500
References: <20010506160103.1A2EE255CE@www.haskell.org> <3B11CA1D.B2729F88@yahoo.com>
Message-ID: <20010528114113.B5546@digitas.harvard.edu>
On 2001-05-27T22:46:37-0500, Jay Cox wrote:
> >data S m a = Nil | Cons a (m (S m a))
>
> >instance (Show a, Show (m (S m a))) => Show (S m a) where
> > show Nil = "Nil"
> > show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
Here's how I've been handling such situations:
data S m a = Nil | Cons a (m (S m a))
-- "ShowF f" means that the functor f "preserves Show"
class ShowF f where
  showsPrecF :: (Int -> a -> ShowS) -> (Int -> f a -> ShowS)
-- "instance ShowF []" is based on showList
instance ShowF [] where
  showsPrecF sh p []     = showString "[]"
  showsPrecF sh p (x:xs) = showChar '[' . sh 0 x . showl xs
    where showl []     = showChar ']'
          showl (x:xs) = showChar ',' . sh 0 x . showl xs
-- S preserves ShowF
instance (ShowF m) => ShowF (S m) where
  showsPrecF sh p Nil        = showString "Nil"
  showsPrecF sh p (Cons x y) = showString "Cons "
    . sh 0 x . showChar ' ' . showsPrecF (showsPrecF sh) 0 y
-- Now we can define "instance Show (S m a)" as desired
instance (Show a, ShowF m) => Show (S m a) where
  showsPrec = showsPrecF showsPrec
You could call it the poor man's generic programming...
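For instance (a quick sketch, untested), taking m = [] gives roughly what a
derived instance would print:

example :: String
example = show (Cons (1::Int) [Cons 2 [], Nil])
-- ==> "Cons 1 [Cons 2 [],Nil]"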
--
Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig
>>My SUV is bigger than your bike. Stay out of the damn road!
>Kiss my reflector, SUV-boy
I'm too busy sucking on my tailpipe, bike dude.
From qrczak@knm.org.pl Mon May 28 17:13:47 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 28 May 2001 16:13:47 GMT
Subject: Functional programming in Python
References: <9etq1o$lof$1@qrnik.zagroda>
Message-ID:
Mon, 28 May 2001 10:23:58 +0100, Malcolm Wallace writes:
> It seems that right-associativity is so intuitive that even the
> person proposing it doesn't get it right. :-)
And even those who correct them :-)
>> f x (foldr1 f xs) f x foldr1 f xs
>
> Wouldn't the rhs actually mean f x (foldr1 (f xs)) in current notation?
No: f (x (foldr1 (f xs)))
Basically Haskell's style uses curried functions, so it's essential
to be able to apply a function to multiple parameters without a number
of nested parentheses.
BTW, before I knew Haskell I experimented with a syntax in which 'x f'
is the application of 'f' to 'x', and 'x f g' means '(x f) g'. Other
arguments can also be on the right, but in this case with parentheses,
e.g. 'x f (y)' is a function f applied to two arguments.
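A rough Haskell sketch of the same flavour (the '|>' name is just something
I'm making up here, nothing standard):

infixl 1 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x

-- 'x |> f |> g' parses as '(x |> f) |> g', i.e. g (f x):
ex :: String
ex = 3 |> (+1) |> show      -- ==> "4"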
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From jenglish@flightlab.com Mon May 28 17:21:47 2001
From: jenglish@flightlab.com (Joe English)
Date: Mon, 28 May 2001 09:21:47 -0700
Subject: Multithreaded stateful software
In-Reply-To:
References:
Message-ID: <200105281621.JAA27593@dragon.flightlab.com>
Mark Carroll wrote:
> One of the projects I have coming up is a multi-threaded server that
> manages many clients in performing a distributed computation using a
> number of computers. [...]
>
> (a) This really isn't what Haskell was designed for, and if I try to write
> this in Haskell I'll never want to touch it again.
>
> (b) This project is quite feasible in Haskell but when it's done I'll feel
> I should have just used Java or something.
>
> (c) Haskell's monads, concurrency stuff and TCP/IP libraries are really
> quite powerful and useful, and I'll be happy I picked Haskell for the
> task.
There's also:
(d) You end up learning all sorts of new things about distributed
processing (as well as Haskell) and, armed with the new knowledge,
future problems of the same nature will be easier to solve
no matter what language you use.
That's what usually happens to me.
(Personally, if I had this project coming up, I'd use it
as an excuse to finally learn Erlang...)
--Joe English
jenglish@flightlab.com
From simonpj@microsoft.com Mon May 28 17:28:07 2001
From: simonpj@microsoft.com (Simon Peyton-Jones)
Date: Mon, 28 May 2001 09:28:07 -0700
Subject: Multithreaded stateful software
Message-ID: <37DA476A2BC9F64C95379BF66BA26902D72FD0@red-msg-09.redmond.corp.microsoft.com>
| (c) Haskell's monads, concurrency stuff and TCP/IP libraries
| are really quite powerful and useful, and I'll be happy I
| picked Haskell for the task.
Definitely (c). See Simon Marlow's paper about his experience
of writing a web server (highly concurrent), and my tutorial
"Tackling the awkward squad". Both at
http://research.microsoft.com/~simonpj/papers/marktoberdorf.htm
Haskell is a great language for writing concurrent applications.
Simon
From mark@chaos.x-philes.com Mon May 28 19:02:48 2001
From: mark@chaos.x-philes.com (Mark Carroll)
Date: Mon, 28 May 2001 14:02:48 -0400 (EDT)
Subject: Multithreaded stateful software
In-Reply-To: <37DA476A2BC9F64C95379BF66BA26902D72FD0@red-msg-09.redmond.corp.microsoft.com>
Message-ID:
On Mon, 28 May 2001, Simon Peyton-Jones wrote:
(snip)
> http://research.microsoft.com/~simonpj/papers/marktoberdorf.htm
>
> Haskell is a great language for writing concurrent applications.
Thanks! That's very interesting. In a way, I guess I'm taking something of
a leap of faith: if everything goes to plan, then the code may be used for
quite some time, being extended when necessary, so I must hope that
the various useful GHC extensions, perhaps slightly modified, will be
preserved and maintained in some form in GHC or some other
Haskell compiler. In choosing a programming language, it's hard to trade
off wanting certainty of a good, free compiler still existing in a few
years' time that can compile your code with minimal tinkering, against
wanting to actually benefit from a lot of the important programming
language research that's gone on. (-:
I get the feeling that, although experimental, a lot of the various
extensions are probably more or less the way things will go and, of
languages in its class, Haskell seems to be doing really quite well, so
I'm not all that worried; really I'm just noting the issue.
But, back to the main point: thanks very much! These papers give me some
faith that maybe Haskell is now as generally useful as I'd hoped.
-- Mark
From Tom.Pledger@peace.com Mon May 28 21:59:04 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Tue, 29 May 2001 08:59:04 +1200
Subject: Why is there a space leak here?
In-Reply-To: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout>
References: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout>
Message-ID: <15122.48152.852254.22700@waytogo.peace.co.nz>
David Bakin writes:
:
| I have been puzzling over this for nearly a full day (getting this
| reduced version from my own code which wasn't working). In
| general, how can I either a) analyze code looking for a space leak
| or b) experiment (e.g., using Hugs) to find a space leak? Thanks!
| -- Dave
a) Look at how much of the list needs to exist at any one time.
| -- This has a space leak, e.g., when reducing (length (foo1 1000000))
| foo1 m
| = take m v
| where
| v = 1 : flatten (map triple v)
| triple x = [x,x,x]
When you consume the (3N)th cell of v, you can't yet garbage collect
the Nth cell because it will be needed for generating the (3N+1)th,
(3N+2)th and (3N+3)th.
So, as you proceed along the list, about two thirds of it must be
retained in memory.
| -- This has no space leak, e.g., when reducing (length (foo2 1000000))
| foo2 m
| = take m v
| where
| v = 1 : flatten (map single v)
| single x = [x]
By contrast, when you consume the (N+1)th cell of this v, you free up
the Nth, so foo2 runs in constant space.
| -- flatten a list-of-lists
| flatten :: [[a]] -> [a]
:
Rather like concat?
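(Not that it changes anything here: a sketch of foo1 written with the
Prelude's concat leaks in just the same way, for the same reason.)

-- still leaks when reducing (length (foo1 1000000))
foo1 :: Int -> [Int]
foo1 m = take m v
  where
    v        = 1 : concat (map triple v)
    triple x = [x,x,x]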
Regards,
Tom
From mg169780@students.mimuw.edu.pl Mon May 28 22:25:21 2001
From: mg169780@students.mimuw.edu.pl (Michal Gajda)
Date: Mon, 28 May 2001 23:25:21 +0200 (CEST)
Subject: Why is there a space leak here?
In-Reply-To: <15122.48152.852254.22700@waytogo.peace.co.nz>
Message-ID:
On Tue, 29 May 2001, Tom Pledger wrote:
> David Bakin writes:
>
> a) Look at how much of the list needs to exist at any one time.
>
> | -- This has a space leak, e.g., when reducing (length (foo1 1000000))
> | foo1 m
> | = take m v
> | where
> | v = 1 : flatten (map triple v)
> | triple x = [x,x,x]
>
> When you consume the (3N)th cell of v, you can't yet garbage collect
> the Nth cell because it will be needed for generating the (3N+1)th,
> (3N+2)th and (3N+3)th.
>
> So, as you proceed along the list, about two thirds of it must be
> retained in memory.
The last sentence seems false. You free up the Nth cell of v when you finish
with the 3Nth cell of the result.
> | -- This has no space leak, e.g., when reducing (length (foo2 1000000))
> | foo1 m
(...the only difference below:)
> | single x = [x]
Greetings :-)
Michal Gajda
korek@icm.edu.pl
*knowledge-hungry student*
From Tom.Pledger@peace.com Mon May 28 22:48:58 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Tue, 29 May 2001 09:48:58 +1200
Subject: Why is there a space leak here?
In-Reply-To:
References: <15122.48152.852254.22700@waytogo.peace.co.nz>
Message-ID: <15122.51146.136004.405892@waytogo.peace.co.nz>
Michal Gajda writes:
| On Tue, 29 May 2001, Tom Pledger wrote:
:
| > When you consume the (3N)th cell of v, you can't yet garbage collect
| > the Nth cell because it will be needed for generating the (3N+1)th,
| > (3N+2)th and (3N+3)th.
| >
| > So, as you proceed along the list, about two thirds of it must be
| > retained in memory.
|
| The last sentence seems false. You free up the Nth cell of v when you
| finish with the 3Nth cell of the result.
I counted from 0. Scouts' honour. Call (!!) as a witness.
;-)
From davidbak@cablespeed.com Mon May 28 23:22:10 2001
From: davidbak@cablespeed.com (David Bakin)
Date: Mon, 28 May 2001 15:22:10 -0700
Subject: Why is there a space leak here?
References: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout> <15122.48152.852254.22700@waytogo.peace.co.nz>
Message-ID: <001a01c0e7c4$a6c887e0$5900a8c0@girlsprout>
Ah, thank you for pointing out concat to me. (Oddly, without knowing about
concat, I had tried foldr1 (++) and also foldl1 (++) but got the same space
problem and so tried to 'factor it out'.)
OK, now I see what's going on - your explanation is good, thanks.
Which of the various tools built-in or added to Hugs, GHC, NHC, etc. would
help me visualize what's actually going on here? I think Hood would (using
a newer Hugs, of course, I'm going to try it). What else?
-- Dave
----- Original Message -----
From: "Tom Pledger"
To: "David Bakin"
Cc:
Sent: Monday, May 28, 2001 1:59 PM
Subject: Why is there a space leak here?
> David Bakin writes:
> :
> | I have been puzzling over this for nearly a full day (getting this
> | reduced version from my own code which wasn't working). In
> | general, how can I either a) analyze code looking for a space leak
> | or b) experiment (e.g., using Hugs) to find a space leak? Thanks!
> | -- Dave
>
> a) Look at how much of the list needs to exist at any one time.
>
> | -- This has a space leak, e.g., when reducing (length (foo1 1000000))
> | foo1 m
> | = take m v
> | where
> | v = 1 : flatten (map triple v)
> | triple x = [x,x,x]
>
> When you consume the (3N)th cell of v, you can't yet garbage collect
> the Nth cell because it will be needed for generating the (3N+1)th,
> (3N+2)th and (3N+3)th.
>
> So, as you proceed along the list, about two thirds of it must be
> retained in memory.
>
> | -- This has no space leak, e.g., when reducing (length (foo2 1000000))
> | foo2 m
> | = take m v
> | where
> | v = 1 : flatten (map single v)
> | single x = [x]
>
> By contrast, when you consume the (N+1)th cell of this v, you free up
> the Nth, so foo2 runs in constant space.
>
> | -- flatten a list-of-lists
> | flatten :: [[a]] -> [a]
> :
>
> Rather like concat?
>
> Regards,
> Tom
>
From jcab@roningames.com Tue May 29 01:53:38 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Mon, 28 May 2001 17:53:38 -0700
Subject: Type resolution problem
In-Reply-To: <20010528114113.B5546@digitas.harvard.edu>
References: <3B11CA1D.B2729F88@yahoo.com>
<20010506160103.1A2EE255CE@www.haskell.org>
<3B11CA1D.B2729F88@yahoo.com>
Message-ID: <4.3.2.7.2.20010528173801.03795668@207.33.235.243>
I'm having a bit of a problem with a project of mine. Just check out
the following, which is the minimal piece of code that shows the problem:
----
type MyType a = a
(+++) :: MyType a -> MyType a -> MyType a
a +++ b = a
class MyClass a where
  baseVal :: a
val1 :: MyClass a => MyType a
val1 = baseVal
val2 :: MyClass a => MyType a
val2 = baseVal
-- combVal :: MyClass a => MyType a
combVal = val1 +++ val2 -- line 18 is here...
----
Trying this in Hugs returns the following error:
----
ERROR E:\JCAB\Haskell\testcv.hs:18 - Unresolved top-level overloading
*** Binding : combVal
*** Outstanding context : MyClass b
----
Now, how can it possibly say that the context "MyClass b" is
outstanding? What does this mean?
Uncommenting the type expression above clears the error. But, why can't
the compiler deduce it by itself? I mean, if a function has a type shaped
like (a -> a -> a) and it is applied to arguments of type (cv => a) where the
constraints are identical, then the result MUST be (cv => a) too, right? Or
am I missing something here?
Don't get me wrong, I can just put type declarations everywhere in my
code. It's a good thing, too. But this problem is really nagging at me
because I don't get where the problem is.
Any ideas or pointers?
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From chak@cse.unsw.edu.au Tue May 29 05:15:58 2001
From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty)
Date: Tue, 29 May 2001 14:15:58 +1000
Subject: Multithreaded stateful software
In-Reply-To:
References:
Message-ID: <20010529141558E.chak@cse.unsw.edu.au>
Mark Carroll wrote,
> One of the projects I have coming up is a multi-threaded server that
> manages many clients in performing a distributed computation using a
> number of computers. So, we care about state, and control flow has some
> concurrent threads and is partially event-driven.
>
> Some possibilities come to my mind:
>
> (a) This really isn't what Haskell was designed for, and if I try to write
> this in Haskell I'll never want to touch it again.
>
> (b) This project is quite feasible in Haskell but when it's done I'll feel
> I should have just used Java or something.
>
> (c) Haskell's monads, concurrency stuff and TCP/IP libraries are really
> quite powerful and useful, and I'll be happy I picked Haskell for the
> task.
In my experience features such as Haskell's type system and
the ease with which you can handle higher-order functions
are extremely useful in code that has to deal with state and
concurrency.
I guess, the main problem with using Haskell for these kinds
of applications is that relatively little has been written
about them yet. SimonPJ's paper "Tackling the awkward
squad" and SimonM's Web server improved the situation, but,
for example, none of the Haskell textbooks covers these
features. Nevertheless, there are quite a number of people
now who have used Haskell in ways similar to what you need.
So, don't hesitate to ask on this or other Haskell lists if
you have questions or need example code.
Cheers,
Manuel
From qrczak@knm.org.pl Tue May 29 06:41:19 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 29 May 2001 05:41:19 GMT
Subject: Type resolution problem
References: <3B11CA1D.B2729F88@yahoo.com> <20010506160103.1A2EE255CE@www.haskell.org> <3B11CA1D.B2729F88@yahoo.com> <9evad0$rc7$1@qrnik.zagroda>
Message-ID:
Mon, 28 May 2001 17:53:38 -0700, Juan Carlos Arevalo Baeza writes:
> Uncommenting the type expression above clears the error.
> But, why can't the compiler deduce it by itself?
Monomorphism restriction strikes again. See section 4.5.5 in the
Haskell 98 Report. A pattern binding without an explicit type signature
is considered monomorphic (it must resolve to a single non-overloaded
type), and the type is determined by the points of usage or by the
defaulting rules for numeric types (neither applies here).
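Concretely, with the declarations from your message, the commented-out
signature is exactly what lifts the binding out of the restriction
(a sketch, reusing your names):

-- pattern binding without a signature: monomorphic, and since MyClass
-- has no defaulting rules the context "MyClass b" can never be resolved
-- combVal = val1 +++ val2

-- with the signature the binding is polymorphic again
combVal :: MyClass a => MyType a
combVal = val1 +++ val2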
IMHO the monomorphism restriction should be removed (at least for bindings
of the form var = expr without a type signature).
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From kahl@heraklit.informatik.unibw-muenchen.de Tue May 29 08:49:55 2001
From: kahl@heraklit.informatik.unibw-muenchen.de (kahl@heraklit.informatik.unibw-muenchen.de)
Date: 29 May 2001 07:49:55 -0000
Subject: Why is there a space leak here?
In-Reply-To: <001a01c0e7c4$a6c887e0$5900a8c0@girlsprout>
(davidbak@cablespeed.com)
References: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout> <15122.48152.852254.22700@waytogo.peace.co.nz> <001a01c0e7c4$a6c887e0$5900a8c0@girlsprout>
Message-ID: <20010529074955.4372.qmail@dionysos.informatik.unibw-muenchen.de>
David Bakin writes:
> Which of the various tools built-in or added to Hugs, GHC, NHC, etc. would
> help me visualize what's actually going on here? I think Hood would (using
> a newer Hugs, of course, I'm going to try it). What else?
I just used my old ghc-4.06 add-in ``middle-end'' ``MHA'' to generate
a HOPS module from David's message (slightly massaged, appended below),
and then used HOPS to generate two animations:
http://ist.unibw-muenchen.de/kahl/MHA/Bakin_foo1_20.ps.gz
http://ist.unibw-muenchen.de/kahl/MHA/Bakin_foo2_20.ps.gz
Hold down the space key in ghostview to get the animation effect!
The left ``fan'', present in both examples, is the result list,
and only takes up space in reality as long as it is used.
The right ``fan'', visible only in foo1, contains the cycle
of the definition of v, and represents the space leak.
The take copies cons node away from the cycle.
The HOPS input was generated automatically by an unreleased
GHC ``middle end'' that is still stuck at ghc-4.06.
The homepage of my term graph programming system HOPS is:
http://ist.unibw-muenchen.de/kahl/HOPS/
Wolfram
> module Bakin where
-- This has a space leak, e.g., when reducing (length (foo1 1000000))
> foo1 :: Int -> [Int]
> foo1 m
>   = take m v
>   where
>     v        = 1 : flatten (map triple v)
>     triple x = [x,x,x]
-- This has no space leak, e.g., when reducing (length (foo2 1000000))
> foo2 :: Int -> [Int]
> foo2 m
>   = take m v
>   where
>     v        = 1 : flatten (map single v)
>     single x = [x]
-- flatten a list-of-lists
> flatten :: [[a]] -> [a]
> flatten []             = []
> flatten ([]:xxs)       = flatten xxs
> flatten ((x':xs'):xxs) = x' : flatten' xs' xxs
> flatten' []       xxs = flatten xxs
> flatten' (x':xs') xxs = x' : flatten' xs' xxs
From koen@cs.chalmers.se Tue May 29 09:29:01 2001
From: koen@cs.chalmers.se (Koen Claessen)
Date: Tue, 29 May 2001 10:29:01 +0200 (MET DST)
Subject: Funny type.
In-Reply-To: <20010528114113.B5546@digitas.harvard.edu>
Message-ID:
Jay Cox complained that the following is not possible:
| data S m a = Nil | Cons a (m (S m a))
|
| instance (Show a, Show (m (S m a))) => Show (S m a) where
| show Nil = "Nil"
| show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
Ken Shan answered:
| Here's how I've been handling such situations:
|
| data S m a = Nil | Cons a (m (S m a))
|
| -- "ShowF f" means that the functor f "preserves Show"
| class ShowF f where
| showsPrecF :: (Int -> a -> ShowS) -> (Int -> f a -> ShowS)
Actually, this class definition can be simplified to:
class ShowF f where
  showsPrecF :: Show a => Int -> f a -> ShowS
And the rest of Ken's code accordingly.
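Spelled out, the rest might look something like this (a rough sketch,
untested; S is Ken's datatype from above):

-- data S m a = Nil | Cons a (m (S m a))

instance ShowF [] where
  showsPrecF _ = showList          -- the Prelude's showList does the work

instance ShowF m => ShowF (S m) where
  showsPrecF p Nil        = showString "Nil"
  showsPrecF p (Cons x y) = showString "Cons "
    . showsPrec 0 x . showChar ' ' . showsPrecF 0 y

instance (Show a, ShowF m) => Show (S m a) where
  showsPrec = showsPrecF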
/Koen.
--
Koen Claessen http://www.cs.chalmers.se/~koen
phone:+46-31-772 5424 mailto:koen@cs.chalmers.se
-----------------------------------------------------
Chalmers University of Technology, Gothenburg, Sweden
From davidbak@cablespeed.com Tue May 29 09:31:56 2001
From: davidbak@cablespeed.com (David Bakin)
Date: Tue, 29 May 2001 01:31:56 -0700
Subject: Why is there a space leak here?
References: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout> <15122.48152.852254.22700@waytogo.peace.co.nz> <001a01c0e7c4$a6c887e0$5900a8c0@girlsprout> <20010529074955.4372.qmail@dionysos.informatik.unibw-muenchen.de>
Message-ID: <00a001c0e819$da218470$5900a8c0@girlsprout>
That's a very nice visualization - exactly the kind of thing I was hoping
for. I grabbed your papers and will look over them for more information,
thanks very much for taking the trouble! The animations you sent me - and
the ones on your page - are really nice; it would be nice to have a system
like HOPS available with GHC/GHCi (I understand it is more than the
visualization system, but that's a really useful part of it).
(I also found out that Hood didn't help for this particular purpose - though
now that I see how easy it is to use I'll be using it all the time. But it
is carefully designed to show you ("observe") exactly what has been
evaluated at a given point in the program. Thus you can't use it (as far as
I can tell) to show the data structures that are accumulating that haven't
been processed yet - which is what you need to know to find a space
problem.)
-- Dave
----- Original Message -----
From:
To:
Cc:
Sent: Tuesday, May 29, 2001 12:49 AM
Subject: Re: Why is there a space leak here?
>
> David Bakin writes:
>
> > Which of the various tools built-in or added to Hugs, GHC, NHC, etc.
would
> > help me visualize what's actually going on here? I think Hood would
(using
> > a newer Hugs, of course, I'm going to try it). What else?
>
> I just used my old ghc-4.06 add-in ``middle-end'' ``MHA'' to generate
> a HOPS module from David's message (slightly massaged, appended below),
> and then used HOPS to generate two animations:
>
> http://ist.unibw-muenchen.de/kahl/MHA/Bakin_foo1_20.ps.gz
> http://ist.unibw-muenchen.de/kahl/MHA/Bakin_foo2_20.ps.gz
>
> Hold down the space key in ghostview to get the animation effect!
>
> The left ``fan'', present in both examples, is the result list,
> and only takes up space in reality as long as it is used.
> The right ``fan'', visible only in foo1, contains the cycle
> of the definition of v, and represents the space leak.
> The take copies cons node away from the cycle.
>
> The HOPS input was generated automatically by an unreleased
> GHC ``middle end'' that is still stuck at ghc-4.06.
>
> The homepage of my term graph programming system HOPS is:
>
> http://ist.unibw-muenchen.de/kahl/HOPS/
>
>
> Wolfram
>
>
>
>
> > module Bakin where
>
> -- This has a space leak, e.g., when reducing (length (foo1 1000000))
>
> > foo1 :: Int -> [Int]
> > foo1 m
> > = take m v
> > where
> > v = 1 : flatten (map triple v)
> > triple x = [x,x,x]
>
> -- This has no space leak, e.g., when reducing (length (foo2 1000000))
>
> > foo2 :: Int -> [Int]
> > foo2 m
> > = take m v
> > where
> > v = 1 : flatten (map single v)
> > single x = [x]
>
> -- flatten a list-of-lists
>
> > flatten :: [[a]] -> [a]
> > flatten [] = []
> > flatten ([]:xxs) = flatten xxs
> > flatten ((x':xs'):xxs) = x' : flatten' xs' xxs
> > flatten' [] xxs = flatten xxs
> > flatten' (x':xs') xxs = x': flatten' xs' xxs
>
From karczma@info.unicaen.fr Tue May 29 10:00:41 2001
From: karczma@info.unicaen.fr (Jerzy Karczmarczuk)
Date: Tue, 29 May 2001 11:00:41 +0200
Subject: Functional programming in Python
References: <9etq1o$lof$1@qrnik.zagroda>
Message-ID: <3B136539.220C18EB@info.unicaen.fr>
Marcin Kowalczyk:
> BTW, before I knew Haskell I experimented with a syntax in which 'x f'
> is the application of 'f' to 'x', and 'x f g' means '(x f) g'. Other
> arguments can also be on the right, but in this case with parentheses,
> e.g. 'x f (y)' is a function f applied to two arguments.
Hmmm. An experimental syntax, you say...
Oh, say, you reinvented FORTH?
(No args in parentheses there, a function taking something at its right
simply *knows* that there is something there).
Jerzy Karczmarczuk
Caen, France
From olaf@cs.york.ac.uk Tue May 29 11:28:36 2001
From: olaf@cs.york.ac.uk (Olaf Chitil)
Date: Tue, 29 May 2001 11:28:36 +0100
Subject: Why is there a space leak here?
References: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout> <15122.48152.852254.22700@waytogo.peace.co.nz> <001a01c0e7c4$a6c887e0$5900a8c0@girlsprout> <20010529074955.4372.qmail@dionysos.informatik.unibw-muenchen.de> <00a001c0e819$da218470$5900a8c0@girlsprout>
Message-ID: <3B1379D4.232246FC@cs.york.ac.uk>
David Bakin wrote:
>
> That's a very nice visualization - exactly the kind of thing I was hoping
> for. I grabbed your papers and will look over them for more information,
> thanks very much for taking the trouble! The animations you sent me - and
> the ones on your page - are really nice; it would be nice to have a system
> like HOPS available with GHC/GHCi (I understand it is more than the
> visualization system, but that's a really useful part of it).
>
> (I also found out that Hood didn't help for this particular purpose - though
> now that I see how easy it is to use I'll be using it all the time. But it
> is carefully designed to show you ("observe") exactly what has been
> evaluated at a given point in the program. Thus you can't use it (as far as
> I can tell) to show the data structures that are accumulating that haven't
> been processed yet - which is what you need to know to find a space
> problem.)
You can use GHood,
http://www.cs.ukc.ac.uk/people/staff/cr3/toolbox/haskell/GHood/ .
It animates the evaluation process at a given point in the program.
It doesn't show unevaluated expressions, but for most purposes you don't
need to see them. What matters most is to see when each subexpression is
evaluated.
(an older version of Hood, which is distributed with nhc, also has an
animation facility)
At some time we also hope to provide such an animation viewer for our
Haskell tracer Hat. The trace already contains all the necessary
information.
Ciao,
Olaf
--
OLAF CHITIL,
Dept. of Computer Science, University of York, York YO10 5DD, UK.
URL: http://www.cs.york.ac.uk/~olaf/
Tel: +44 1904 434756; Fax: +44 1904 432767
From ketil@ii.uib.no Tue May 29 21:44:38 2001
From: ketil@ii.uib.no (Ketil Malde)
Date: 29 May 2001 22:44:38 +0200
Subject: Functional programming in Python
In-Reply-To: Jerzy Karczmarczuk's message of "Tue, 29 May 2001 11:00:41 +0200"
References: <9etq1o$lof$1@qrnik.zagroda>
<3B136539.220C18EB@info.unicaen.fr>
Message-ID:
Jerzy Karczmarczuk writes:
>> BTW, before I knew Haskell I experimented with a syntax in which 'x f'
>> is the application of 'f' to 'x', and 'x f g' means '(x f) g'.
> Hmmm. An experimental syntax, you say...
> Oh, say, you reinvented FORTH?
Wouldn't
x f g
in a Forth'ish machine mean
g(f,x) -- using "standard" math notation, for a change
rather than
g(f(x))
?
-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
From me@nellardo.com Tue May 29 23:45:44 2001
From: me@nellardo.com (Brook Conner)
Date: Tue, 29 May 2001 18:45:44 -0400
Subject: Functional programming in Python
In-Reply-To:
Message-ID: <20010529224623.CD17C255AB@www.haskell.org>
On Tuesday, May 29, 2001, at 04:44 PM, Ketil Malde wrote:
> Jerzy Karczmarczuk writes:
>
> Wouldn't
> x f g
> in a Forth'ish machine mean
> g(f,x) -- using "standard" math notation, for a change
> rather than
> g(f(x))
> ?
In PostScript, a Forth derivative, it would mean g(f(x)). The
difference comes down to when tokens in the input stream are
evaluated: as they are encountered or at the very end. The input
stream is a queue - first in, first out (FIFO). A language *could*
treat the input stream as a stack (LIFO), but that would require
storing the entire stream in memory before computation could begin.
As Forth-like languages are usually designed for embedded systems
with low memory and processing power, a large LIFO stack including
the entire program is contra-indicated :-) Instead PostScript (and
other Forth-likes I've seen), treat the input stream as a FIFO
queue. This way, the interpreter can handle tokens immediately and
the stack doesn't get any larger than intermediate values in
computations. And it also matches well with the serial
communication these machines usually have with the producer of the
program (there's a reason PostScript laser printers got along fine with
serial ports, not parallel ones).
So, evaluation proceeds as follows:
for each item in the stream (FIFO)
evaluate it
pop items off the stack if evaluation requires it
push result(s) on to stack
So, presuming x is a variable with value "3", and f and g are
functions of one parameter:
x is identified and its value is pushed on to the stack.
so the stream is now "f g" and the stack is now "3"
f is identified as a function. The value of x is popped off the
stack (if f needed more than one parameter, more values would be
popped off - in this case, resulting in an error from an empty
stack).
The stream is now "g" and the stack is empty. The interpreter is
loaded with the function f and the value 3.
f is evaluated with the value of x. The result is pushed onto the stack.
The stream is "g" and the stack contains the result of f(3).
g is identified as a function and the stack is popped.
The stream is empty and the stack is too. The interpreter is loaded
with the function g and the value f(3).
g is evaluated with the value f(3). The result is pushed onto the stack.
As the stream is now empty, but the stack has items in it, a
PostScript interpreter would typically print the contents of the
stack.
This is of course g(f(x)), not g(f,x). If the input stream was a
stack, too, then "g" would be evaluated first. If g took two
arguments, it would produce g(f,x). If g took one argument, it
would produce g(f). If that was a function, then it would continue
with (g(f))(x), otherwise it would end with a stack containing two
items: g(f) on top of x.
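If it helps, here is the FIFO scheme above as a rough Haskell sketch
(made-up types, just to pin down the order of events):

data Tok = Lit Double
         | Fun1 (Double -> Double)
         | Fun2 (Double -> Double -> Double)

-- consume the token stream front to back, keeping a value stack
eval :: [Tok] -> [Double] -> [Double]
eval []            st           = st
eval (Lit v  : ts) st           = eval ts (v : st)
eval (Fun1 f : ts) (x : st)     = eval ts (f x : st)
eval (Fun2 f : ts) (y : x : st) = eval ts (f x y : st)  -- earlier-pushed arg first
eval _             _            = error "stack underflow"

-- "x f g" with unary f and g:
--   eval [Lit 3, Fun1 f, Fun1 g] []  ==>  [g (f 3)]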
The way you get g(f,x) in PostScript or other forths is quoting (as
in Lisp). PostScript handles this with a /slash for a single token
or {braces} for lists. Either can be evaluated later: if it's a
stream of tokens, then those are evaluated (FIFO). So, procedure
definition in PostScript looks like this:
/inches { 72 * } def
which pushes the symbol "inches" onto the stack and then the list
of tokens "72 *" onto the stack (PostScript's native unit is the
point, defined as 1/72 of an inch). "def" pops the top of the stack
and attaches it as a value to the variable named in the symbol next
in the stack (no symbol equals an error). Later, a statement like
"3 inches" (an unusually readable statement in a Forth-like
language :-) is equivalent to "3 {72 *}" which is equivalent to "3
72 *", or 216.
Brook
From qrczak@knm.org.pl Wed May 30 00:05:11 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 29 May 2001 23:05:11 GMT
Subject: Functional programming in Python
References: <9etq1o$lof$1@qrnik.zagroda> <3B136539.220C18EB@info.unicaen.fr> <9f19cf$24g$1@qrnik.zagroda>
Message-ID:
29 May 2001 22:44:38 +0200, Ketil Malde writes:
> Wouldn't
> x f g
> in a Forth'ish machine mean
> g(f,x) -- using "standard" math notation, for a change
> rather than
> g(f(x))
> ?
It depends on whether f changes the value at the top of the stack or
only puts something there.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From Sven.Panne@informatik.uni-muenchen.de Wed May 30 13:11:08 2001
From: Sven.Panne@informatik.uni-muenchen.de (Sven Panne)
Date: Wed, 30 May 2001 14:11:08 +0200
Subject: Announcement: new HOpenGL Tutorial
References: <20010530112701.62154.qmail@web10006.mail.yahoo.com>
Message-ID: <3B14E35C.F282D1D2@informatik.uni-muenchen.de>
[ redirected to haskell-cafe ]
Ronald Legere wrote:
> Looks great! I think the community will really
> appreciate it.
And I extremely appreciate it when other people write documentation
for my stuff. :-) Thanks, great job!
> I will have to 'give it a whirl' myself, if I can
> ever get HOpenGL to compile on my solaris sparc. (I
> keep having trouble with 'test' in the makefiles. I
> must have a weird version of test or something. I am no
> OS guru :) (If anyone has ever seen this problem, let
> me know )
Just a note: HOpenGL has been moved into GHC's fptools repository, and
the tar balls on my web page are quite old. I'm not sure if they will
work with more recent versions of GHC, so if you're totally stuck, join
the bleeding edge and compile GHC from CVS, adding --enable-hopengl to
configure's options. This builds an OpenGL package, too, but the resulting
GHC doesn't know about it, so you will have to fiddle around with GHC's
options a bit. Furthermore, the binding is not yet complete again; I guess
about 80% has been ported. Not optimal, but I'm working on it...
Alas, other Haskell systems are not yet supported, but given the recent
advances in the FFI and library area, this will hopefully change in the
near future.
Cheers,
Sven
From andrew@andrewcooke.free-online.co.uk Wed May 2 22:44:12 2001
From: andrew@andrewcooke.free-online.co.uk (andrew@andrewcooke.free-online.co.uk)
Date: Wed, 2 May 2001 22:44:12 +0100
Subject: Interesting: "Lisp as a competitive advantage"
In-Reply-To: <3AEF5B9F.88D92A76@ia.nsc.com>; from senganb@ia.nsc.com on Tue, May 01, 2001 at 08:58:07PM -0400
References: <3AEF5B9F.88D92A76@ia.nsc.com>
Message-ID: <20010502224412.B329@liron>
Two things in that interested me. First, the comment that "runtime
typing" is becoming more popular - did he mean strong rather than weak
typing, or dynamic rather than static?
Second, more interestingly, I was surprised at his emphasis on macros.
Having read his (excellent) On Lisp maybe I shouldn't have been (since
that is largely about macros), but anyway, I think it's interesting
because it's one of the big differences between Lisp and the
statically typed languages (STLs).
I have used neither Lisp nor Haskell (or ML) long enough to make a
sound judgement on this, so I'd like to hear other views. My first
thought is that higher order functions are easier to manage in STLs
and might provide some compensation. Also, I'm aware that (limited?)
code manipulation is possible in some STLs (there's an ML library
whose name I've forgotten). How do these compare?
I realise I could get more response by a usenet post to c.l.f and
c.l.l, but I'm hoping there might be more light here. Apologies if
it's too off-topic.
Thanks,
Andrew
On Tue, May 01, 2001 at 08:58:07PM -0400, Sengan wrote:
> http://www.paulgraham.com/paulgraham/avg.html
>
> I wonder how Haskell compares in this regard.
> Any comment from Haskell startups? (eg Galois Connections)
>
> Sengan
--
http://www.andrewcooke.free-online.co.uk/index.html
From bernardp@cli.di.unipi.it Thu May 3 08:40:02 2001
From: bernardp@cli.di.unipi.it (Pierpaolo BERNARDI)
Date: Thu, 3 May 2001 09:40:02 +0200 (MET DST)
Subject: Interesting: "Lisp as a competitive advantage"
In-Reply-To: <20010502224412.B329@liron>
Message-ID:
On Wed, 2 May 2001 andrew@andrewcooke.free-online.co.uk wrote:
> Second, more interestingly, I was surprised at his emphasis on macros.
> Having read his (excellent) On Lisp maybe I shouldn't have been (since
> that is largely about macros), but anyway, I think it's interesting
> because it's one of the big differences between Lisp and the
> statically typed languages (STLs).
Here's a lisper's opinion:
I think that the importance of macros is overrated in the lisp community.
Macros are great if all you have is a traditional, first order language,
they are not so essential if you have HOFs. In Haskell there are better
ways than macros to solve the problems macros solve in Lisp.
The real productivity booster of Lisp is its syntax. The fact that Lisp
syntax makes it easy to implement a macro system is a secondary benefit.
And syntax is, IMHO, the point that more modern functional languages get
wrong (yes, I know that this is controversial, and many non-lispers have
strong objections to this statement. No need to remind me).
Pierpaolo (ducking)
From nr@eecs.harvard.edu Thu May 3 15:16:37 2001
From: nr@eecs.harvard.edu (Norman Ramsey)
Date: Thu, 03 May 2001 10:16:37 -0400
Subject: Interesting: "Lisp as a competitive advantage"
In-Reply-To: Message from Sengan
of "Tue, 01 May 2001 20:58:07 EDT." <3AEF5B9F.88D92A76@ia.nsc.com>
Message-ID: <200105031416.f43EGbl30938@wally.eecs.harvard.edu>
> http://www.paulgraham.com/paulgraham/avg.html
>
> I wonder how Haskell compares in this regard.
I loved Graham's characterization of the hierarchy of power in
programming languages:
- Languages less powerful than the one you understand look impoverished
- Languages more powerful than the one you understand look weird
When I compare Lisp and Haskell, the big question in my mind is this:
is lazy evaluation sufficient to make up for the lack of macros?
I would love to hear from a real Lisp macro hacker who has also done
lazy functional programming.
Norman
From erik@meijcrosoft.com Thu May 3 17:08:59 2001
From: erik@meijcrosoft.com (Erik Meijer)
Date: Thu, 3 May 2001 09:08:59 -0700
Subject: Interesting: "Lisp as a competitive advantage"
References: <200105031416.f43EGbl30938@wally.eecs.harvard.edu>
Message-ID: <005901c0d3eb$5e5025a0$6f0c1cac@redmond.corp.microsoft.com>
----- Original Message -----
From: "Norman Ramsey"
To:
Sent: Thursday, May 03, 2001 7:16 AM
Subject: Re: Interesting: "Lisp as a competitive advantage"
> > http://www.paulgraham.com/paulgraham/avg.html
> >
> > I wonder how Haskell compares in this regard.
>
> I loved Graham's characterization of the hierarchy of power in
> programming languages:
>
> - Languages less powerful than the one you understand look impoverished
> - Languages more powerful than the one you understand look weird
Same for me; although you should not fall into the trap of reversing it, i.e.
if the language looks weird it is more powerful!
> When I compare Lisp and Haskell, the big question in my mind is this:
> is lazy evaluation sufficient to make up for the lack of macros?
Don't you get dynamic scoping as well with macros?
> I would love to hear from a real Lisp macro hacker who has also done
> lazy functional programming.
>
>
> Norman
From Alan@LCS.MIT.EDU Thu May 3 20:14:07 2001
From: Alan@LCS.MIT.EDU (Alan Bawden)
Date: Thu, 3 May 2001 15:14:07 -0400 (EDT)
Subject: Haskell-Cafe digest, Vol 1 #122 - 3 msgs
In-Reply-To: <20010503160105.DA219255FB@www.haskell.org>
(haskell-cafe-request@haskell.org)
References: <20010503160105.DA219255FB@www.haskell.org>
Message-ID: <3May2001.151316.Alan@LCS.MIT.EDU>
Subject: Re: Interesting: "Lisp as a competitive advantage"
Date: Thu, 03 May 2001 10:16:37 -0400
From: Norman Ramsey
> http://www.paulgraham.com/paulgraham/avg.html
>
> I wonder how Haskell compares in this regard.
I loved Graham's characterization of the hierarchy of power in
programming languages:
- Languages less powerful than the one you understand look impoverished
- Languages more powerful than the one you understand look weird
When I compare Lisp and Haskell, the big question in my mind is this:
is lazy evaluation sufficient to make up for the lack of macros?
I would love to hear from a real Lisp macro hacker who has also done
lazy functional programming.
Norman
From Alan@LCS.MIT.EDU Thu May 3 21:25:45 2001
From: Alan@LCS.MIT.EDU (Alan Bawden)
Date: Thu, 3 May 2001 16:25:45 -0400 (EDT)
Subject: Interesting: "Lisp as a competitive advantage"
In-Reply-To: <20010503160105.DA219255FB@www.haskell.org>
(haskell-cafe-request@haskell.org)
References: <20010503160105.DA219255FB@www.haskell.org>
Message-ID: <3May2001.151316.Alan@LCS.MIT.EDU>
(Drat. Sorry for the duplicate message. I just learned a new Emacs
keystroke by accident... Ever notice how you never make a mistake like
that unless the audience is very large?)
Subject: Re: Interesting: "Lisp as a competitive advantage"
Date: Thu, 03 May 2001 10:16:37 -0400
From: Norman Ramsey
> http://www.paulgraham.com/paulgraham/avg.html
...
When I compare Lisp and Haskell, the big question in my mind is this:
is lazy evaluation sufficient to make up for the lack of macros?
I would love to hear from a real Lisp macro hacker who has also done
lazy functional programming.
The answer is: "almost". Simply having higher order functions eliminates a
lot of the need to macros. Common Lisp programmers could probably use a
lot fewer macros than they do in practice. Lazy evaluation eliminates
the need for another pile of macros. But there are still things you
need macros for.
Here's a macro I use in my Scheme code all the time. I write:
(assert (< x 3))
Which macro expands into:
(if (not (< x 3))
(assertion-failed '(< x 3)))
Where `assertion-failed' is a procedure that generates an appropriate error
message. The problem being solved here is getting the asserted expression
into that error message. I don't see how higher order functions or lazy
evaluation could be used to write an `assert' that behaves like this.
From dpt@math.harvard.edu Fri May 4 00:01:08 2001
From: dpt@math.harvard.edu (Dylan Thurston)
Date: Thu, 3 May 2001 19:01:08 -0400
Subject: Interesting: "Lisp as a competitive advantage"
In-Reply-To: <3May2001.151316.Alan@LCS.MIT.EDU>; from Alan@LCS.MIT.EDU on Thu, May 03, 2001 at 04:25:45PM -0400
References: <20010503160105.DA219255FB@www.haskell.org> <3May2001.151316.Alan@LCS.MIT.EDU>
Message-ID: <20010503190108.B1614@hum14.math.harvard.edu>
On Thu, May 03, 2001 at 04:25:45PM -0400, Alan Bawden wrote:
> Here's a macro I use in my Scheme code all the time. I write:
>
> (assert (< x 3))
>
> Which macro expands into:
>
> (if (not (< x 3))
> (assertion-failed '(< x 3)))
>
> Where `assertion-failed' is a procedure that generates an appropriate error
> message. The problem being solved here is getting the asserted expression
> into that error message. I don't see how higher order functions or lazy
> evaluation could be used to write an `assert' that behaves like this.
This is a good example, which cannot be implemented in
Haskell. "Exception.assert" is built in to the ghc compiler, rather than
being defined within the language. On the other hand, the built in
function gives you the source file and line number rather than the literal
expression; the macro can't do the former.
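For reference, a quick sketch of the builtin in use (I believe the module
was simply called Exception in GHC's hslibs at the time; treat the exact
import as an assumption):

import Exception (assert)

safeRecip :: Double -> Double
safeRecip x = assert (x /= 0) (1 / x)
-- a failing assertion reports the source file and line number,
-- not the text of the expression "x /= 0"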
--Dylan Thurston
dpt@math.harvard.edu
From dankna@brain.mics.net Fri May 4 00:09:01 2001
From: dankna@brain.mics.net (Dan Knapp)
Date: Thu, 3 May 2001 18:09:01 -0500 (EST)
Subject: Interesting: "Lisp as a competitive advantage"
In-Reply-To: <20010503190108.B1614@hum14.math.harvard.edu>
Message-ID:
> > (if (not (< x 3))
> > (assertion-failed '(< x 3)))
>
> This is a good example, which cannot be implemented in
> Haskell. "Exception.assert" is built in to the ghc compiler, rather than
> being defined within the language. On the other hand, the built in
> function gives you the source file and line number rather than the literal
> expression; the macro can't do the former.
Yeah, it's a good example, but are there any other uses for such quoting?
If not, then implementing it as a builtin is perfectly adequate. (Not
trying to pick on Lisp; Lisp is great. Just hoping for more examples.)
| Dan Knapp, Knight of the Random Seed
| http://brain.mics.net/~dankna/
| ONES WHO DOES NOT HAVE TRIFORCE CAN'T GO IN.
From tim@galconn.com Fri May 4 01:09:25 2001
From: tim@galconn.com (Tim Sauerwein)
Date: Thu, 03 May 2001 17:09:25 -0700
Subject: Interesting: "Lisp as a competitive advantage"
References: <200105031416.f43EGbl30938@wally.eecs.harvard.edu>
Message-ID: <3AF1F335.F230981B@galconn.com>
Norman Ramsey wrote:
> I would love to hear from a real Lisp macro hacker who has also done
> lazy functional progrmaming.
I am such a person.
Lisp macros are a way to extend the Lisp compiler. Dylan's example
shows why this reflective power is sometimes useful. Here is another
example. I once wrote a macro to help express pattern-matching rules.
In these rules, variables that began with a question mark were treated
specially.
Having learned Haskell, I am not tempted to go back to Lisp. Yet I
occasionally wish for some sort of reflective syntactic extension.
- Tim Sauerwein
From elf@sandburst.com Fri May 4 02:08:09 2001
From: elf@sandburst.com (Mieszko Lis)
Date: Thu, 3 May 2001 21:08:09 -0400
Subject: Interesting: "Lisp as a competitive advantage"
In-Reply-To: <3AF1F335.F230981B@galconn.com>; from tim@galconn.com on Thu, May 03, 2001 at 05:09:25PM -0700
References: <200105031416.f43EGbl30938@wally.eecs.harvard.edu> <3AF1F335.F230981B@galconn.com>
Message-ID: <20010503210809.C12269@sandburst.com>
Tim Sauerwein wrote:
> I once wrote a macro to help express pattern-matching rules.
> In these rules, variables that began with a question mark were treated
> specially.
David Gifford's Programming Languages class at MIT uses Scheme+, a variant
of MIT Scheme with datatypes and pattern matching. These extensions are
implemented as macros. (http://tesla.lcs.mit.edu/6821)
But of course Haskell has those already :)
-- Mieszko
From uk1o@rz.uni-karlsruhe.de Fri May 4 09:00:40 2001
From: uk1o@rz.uni-karlsruhe.de (Hannah Schroeter)
Date: Fri, 4 May 2001 10:00:40 +0200
Subject: Interesting: "Lisp as a competitive advantage"
In-Reply-To: ; from dankna@brain.mics.net on Thu, May 03, 2001 at 06:09:01PM -0500
References: <20010503190108.B1614@hum14.math.harvard.edu>
Message-ID: <20010504100038.A16522@rz.uni-karlsruhe.de>
Hello!
On Thu, May 03, 2001 at 06:09:01PM -0500, Dan Knapp wrote:
> [...]
> Yeah, it's a good example, but are there any other uses for such quoting?
> If not, then implementing it as a builtin is perfectly adequate. (Not
> trying to pick on Lisp; Lisp is great. Just hoping for more examples.)
IMHO you can do all the things you'd do with separate preprocessing
steps for other languages with Lisp macros, including scanner/parser
generation, for example. Or you could do the analogous thing
to camlp4 in Lisp with Lisp's own standard features (reader macros
+ normal macros).
You can e.g. also emulate Emacs Lisp in Common Lisp by slightly
hacking up the readtable and defining a few macros and functions
into a separate package. That's quite easy, in fact, the more complicated
part would be offering all those primitive functions of Emacs Lisp,
but if you had this, you could compile all those Emacs Lisp packages
into fast code. Imagine GNUs *not* crawling like a snail on a
Pentium 200 *g*
Kind regards,
Hannah.
From karczma@info.unicaen.fr Fri May 4 11:57:29 2001
From: karczma@info.unicaen.fr (Jerzy Karczmarczuk)
Date: Fri, 04 May 2001 12:57:29 +0200
Subject: Macros (Was: Interesting: "Lisp as a competitive advantage")
References: <200105031416.f43EGbl30938@wally.eecs.harvard.edu>
Message-ID: <3AF28B19.BC3B4B7D@info.unicaen.fr>
Discussion about macros, Lisp, laziness etc. Too many people to cite.
Alan Bawden uses macros to write assertions, and Dylan Thurston comments:
...
> > (assert (< x 3))
> >
> > Which macro expands into:
> >
> > (if (not (< x 3))
> > (assertion-failed '(< x 3)))
> >
> > Where `assertion-failed' is a procedure that generates an appropriate error
> > message. The problem being solved here is getting the asserted expression
> > into that error message. I don't see how higher order functions or lazy
> > evaluation could be used to write an `assert' that behaves like this.
>
> This is a good example, which cannot be implemented in
> Haskell. "Exception.assert" is built in to the ghc compiler, rather than
> being defined within the language. On the other hand, the built in
> function gives you the source file and line number rather than the literal
> expression; the macro can't do the former.
>
> --Dylan Thurston
In general this is not true, look at the macro preprocessing in C. If your
parser is kind enough to yield to the user some pragmatic information about
the read text, say, __LINE etc., you can code that kind of control with
macros as well.
Macros in Scheme are used to unfold n-ary control structures such as COND
into a hierarchy of IFs, etc. Nothing (in principle) to do with laziness
or HO functions. They are used also to define object-oriented layers in
Scheme or Lisp. I used them to emulate curried functions in Scheme.
I think that they are less than popular nowadays because they are dangerous,
badly structured, difficult to write "hygienically". Somebody (Erik Meijer?)
asked: "Don't you get dynamic scoping as well with macros?" Well, what is
dynamic here? Surely this is far from "fluid" bindings, this is a good
way to produce name trapping and other diseases.
In Clean there are macros. They are rather infrequently used...
In C++ a whole zone of macro/preprocessor coding began to disappear with
the arrival of inlining, templates, etc.
I think that macros belong to *low-level* languages. Languages where you
feel under the parsing surface the underlying virtual machine. You can
do fabulous things with. My favourite example is the language BALM, many
years before ML, Haskell etc., there was a functional, Lisp-like language
with a more classical, Algol-like syntax, with infix operators, etc.
The language worked on CDC mainframes, under SCOPE/NOS. Its processor was
written in assembler (Compass). But you should have a look at its imple-
mentation! Yes, assembler, nothing more. But this assembler was so macro-
oriented, and so incredibly powerful, that the instructions looked like
Lisp. With recursivity, parentheses, LET forms which allocated registers,
and other completely deliciously crazy constructs. In fact, the authors
used macros to implement the entire Lisp machine, used to process BALM
programs. //Side remark: don't ask me where to find BALM. I tried, I failed.
If *YOU* find it, let me know//
Another place where macros have been used as main horses was the MAINBOL
implementation of Snobol4. But when people started to implement Spitbol
etc. variants of Snobol4, they decided to use more structured, higher-level
approach (there was even an another, portable assembler with higher-level
instructions "embedded"; these avoided the usage of macros).
Jerzy Karczmarczuk
Caen, France
From Keith.Wansbrough@cl.cam.ac.uk Fri May 4 16:52:14 2001
From: Keith.Wansbrough@cl.cam.ac.uk (Keith Wansbrough)
Date: Fri, 04 May 2001 16:52:14 +0100
Subject: Macros (Was: Interesting: "Lisp as a competitive
advantage")
In-Reply-To: Message from Jerzy Karczmarczuk
of "Fri, 04 May 2001 12:57:29 +0200." <3AF28B19.BC3B4B7D@info.unicaen.fr>
Message-ID:
Jerzy Karczmarczuk writes:
> Macros in Scheme are used to unfold n-ary control structures such as COND
> into a hierarchy of IFs, etc. Nothing (in principle) to do with laziness
> or HO functions.
Isn't this exactly the reason that macros are less necessary in lazy languages?
In Haskell you can write
myIf True x y = x
myIf False x y = y
and then a function like
recip x = myIf (abs x < eps) 0 (1 / x)
works as expected. In Scheme,
(define myIf
  (lambda (b x y)
    (if b x y)))
does *not* have the desired behaviour! One can only write myIf using
macros, or by explicitly delaying the arguments.
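(For completeness, the Haskell side as a self-contained sketch; eps is
pinned down arbitrarily here and the function renamed so it doesn't clash
with the Prelude's recip:)

myIf :: Bool -> a -> a -> a
myIf True  x y = x
myIf False x y = y

eps :: Double
eps = 1.0e-12

myRecip :: Double -> Double
myRecip x = myIf (abs x < eps) 0 (1 / x)
-- myRecip 0 ==> 0.0; the (1 / x) branch is never forced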
--KW 8-)
From qrczak@knm.org.pl Fri May 4 17:05:12 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 4 May 2001 16:05:12 GMT
Subject: Macros (Was: Interesting: "Lisp as a competitive advantage")
References: <200105031416.f43EGbl30938@wally.eecs.harvard.edu> <3AF28B19.BC3B4B7D@info.unicaen.fr>
Message-ID:
Fri, 04 May 2001 12:57:29 +0200, Jerzy Karczmarczuk writes:
> In Clean there are macros. They are rather infrequently used...
I think they roughly correspond to inline functions in Haskell.
They are separate in Clean because module interfaces are written
by hand, so the user can include something to be expanded inline in
other modules by making it a macro.
In Haskell module interfaces are generated by the compiler, so they
can contain unfoldings of functions worth inlining without explicit
distinguishing in the source.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From dpt@math.harvard.edu Fri May 4 21:16:29 2001
From: dpt@math.harvard.edu (Dylan Thurston)
Date: Fri, 4 May 2001 16:16:29 -0400
Subject: Implicit parameters and monomorphism
In-Reply-To: ; from qrczak@knm.org.pl on Fri, May 04, 2001 at 07:56:24PM +0000
References: <200105040727.JAA26061@muppet30.cs.chalmers.se> <20010504112707.A474@mark.ugcs.caltech.edu>
Message-ID: <20010504161629.A3551@math.harvard.edu>
On Fri, May 04, 2001 at 07:56:24PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> I would like to make pattern and result type signatures one-way
> matching, like in OCaml: a type variable just gives a name to the given
> part of the type, without constraining it any way - especially without
> "negative constraining", i.e. without yielding an error if it will
> be known more than that it's a possibly constrained type variable...
I'm not sure I understand here. One thing that occurred to me reading
your e-mail was that maybe the implicit universal quantification over
type variables is a bad idea, and maybe type variables should, by
default, have pattern matching semantics. Whether or not this is a
good idea abstractly, the way I imagine it, it would make almost all
existing Haskell code invalid, so it can't be what you're proposing.
Are you proposing that variables still be implicitly quantified in
top-level bindings, but that elsewhere they have pattern-matching
semantics?
Best,
Dylan Thurston
From Alan@LCS.MIT.EDU Fri May 4 21:19:06 2001
From: Alan@LCS.MIT.EDU (Alan Bawden)
Date: Fri, 4 May 2001 16:19:06 -0400 (EDT)
Subject: Macros
In-Reply-To: <20010504160104.E4025255AE@www.haskell.org>
(haskell-cafe-request@haskell.org)
References: <20010504160104.E4025255AE@www.haskell.org>
Message-ID: <4May2001.142441.Alan@LCS.MIT.EDU>
Date: Thu, 3 May 2001 18:09:01 -0500 (EST)
From: Dan Knapp
> > (if (not (< x 3))
> > (assertion-failed '(< x 3)))
>
> This is a good example, which cannot be implemented in
> Haskell. "Exception.assert" is built in to the ghc compiler, rather than
> being defined within the language. On the other hand, the built in
> function gives you the source file and line number rather than the literal
> expression; the macro can't do the former.
Yeah, it's a good example, but are there any other uses for such quoting?
There are a few. But this isn't the -only- reason to still use macros.
We could systematically go through all the macros I've written in the last
few years, and for each one we could figure out what language feature would
be needed in order to make that macro unnecessary. At the end of the
process you would have a larger programming language, but I still wouldn't be
convinced that we had covered all the cases.
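(For reference, a minimal sketch of the GHC built-in mentioned in the quote
above; Control.Exception is assumed as the module name and checkedRecip is an
invented example:)
import Control.Exception (assert)
-- assert :: Bool -> a -> a; when the condition is False, GHC reports the
-- source file and line of the call site, but not the expression text.
checkedRecip :: Double -> Double
checkedRecip x = assert (x /= 0) (1 / x)
main :: IO ()
main = print (checkedRecip 4)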
A macro facility is like a pair of vise-grips (if you don't know what those
are, see http://www.technogulf.com/ht-vise.htm). You can do a lot of
things with a pair of vise-grips, although usually there's a better tool
for the job -- if you haven't got (say) a pipe-wrench, then a pair of
vise-grips can substitute. Now the more tools you have in your tool box,
the less often you will use your vise-grips. But no matter how bloated
your tool box becomes, you will still want to include a pair of vise-grips
for the unanticipated situation.
I have one problem with my own analogy: If you find yourself using your
vise-grips every day for some task, you will probably soon go and purchase a
more appropriate tool. But I think that in many circumstances macros do
such a good job that I don't see the need to clutter up the language with
the special-purpose features needed to replace them.
Date: Fri, 04 May 2001 12:57:29 +0200
From: Jerzy Karczmarczuk
...
I think that they are less than popular nowadays because they are dangerous,
badly structured, difficult to write "hygienically"....
Indeed, you can screw up pretty badly with a pair of vise-grips! A friend
of mine used to say that programmers should have to pass some kind of
licensing test before they would be allowed to write Lisp/Scheme macros.
From qrczak@knm.org.pl Fri May 4 22:17:58 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 4 May 2001 21:17:58 GMT
Subject: Implicit parameters and monomorphism
References: <200105040727.JAA26061@muppet30.cs.chalmers.se> <20010504112707.A474@mark.ugcs.caltech.edu> <20010504161629.A3551@math.harvard.edu>
Message-ID:
Fri, 4 May 2001 16:16:29 -0400, Dylan Thurston pisze:
> I'm not sure I understand here. One thing that occurred to me reading
> your e-mail was that maybe the implicit universal quantification over
> type variables is a bad idea, and maybe type variables should, by
> default, have pattern matching semantics.
Only for type signatures on patterns and results. It's a ghc/Hugs
extension. You can write:
f' arr :: ST s (a i e) = do
    (marr :: STArray s i e) <- thaw arr
    ...
These type variables have the same scope as corresponding value
variables. The s,i,e in the type of marr refer to the corresponding
variables from the result of f'. You could bind i,e to new names in
marr, but not s. (Well, now I'm not sure why there is a difference...)
Type variables from the head of a class are also available in the
class scope in ghc.
You can use bound type variables in ordinary type signatures on expressions
and let-bound variables in their scope. Unbound type variables in these
places are implicitly quantified by forall; I don't want to change this.
Some people think that type variables used in standard type signatures
(expressions and let-bound variables) should be available in the
appropriate scope. I don't have a strong opinion on that.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From jmaessen@mit.edu Sat May 5 00:31:21 2001
From: jmaessen@mit.edu (Jan-Willem Maessen)
Date: Fri, 4 May 2001 19:31:21 -0400 (EDT)
Subject: Macros
Message-ID: <200105042331.TAA05174@au-bon-pain.lcs.mit.edu>
Alan Bawden writes:
> A macro facility is like a pair of vise-grips (if you don't know what
> those are, see http://www.technogulf.com/ht-vise.htm).
I found myself laughing heartily at this apt analogy. I have heard
vice grips described as "the wrong tool for every job." (My own
experience with vice grips backs this up).
That being said, there are a number of things one might want out of a
macro facility, and I think they should be carefully distinguished:
1) The ability to name expressions without evaluating them, e.g. to
cook up a facsimile of laziness.
2) The ability to parrot source code (and maybe source position) back
at the user, e.g. Alan's assert macro, or its C equivalent.
3) The ability to create new binding constructs.
4) The ability to create new declaration constructs.
(1) is pretty well covered by lazy evaluation.
For (2), I wonder if a clever set of compiler-supplied implicit
parameters might do the trick---after all, "the position of expression
e" and "the source code of expression e" are dynamic notions that
could be carefully defined.
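(A rough sketch of that idea using the ImplicitParams extension, passing the
location by hand; ?srcloc and myAssert are invented names, and a real version
would need the compiler to fill ?srcloc in at each call site:)
{-# LANGUAGE ImplicitParams #-}
-- A hand-rolled stand-in for a compiler-supplied source position.
myAssert :: (?srcloc :: String) => Bool -> a -> a
myAssert True  x = x
myAssert False _ = error ("assertion failed at " ++ ?srcloc)
main :: IO ()
main = let ?srcloc = "Example.hs:12"
       in print (myAssert (1 < 2) "ok")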
(3) is trickier. Contrast monadic code before and after "do" notation
was introduced. Haskell made it possible---not even very hard---to do
monadic binding, but there was a good deal of ugly syntactic noise.
The introduction of "do" notation eliminated that noise. My instinct
is that this isn't so easy for things that can't be shoehorned into a
monad. For example, I use the Utrecht attribute grammar tool, and
have trouble imagining how grammars could be coded in pure Haskell
while preserving nice naming properties.
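(To make the do-notation contrast above concrete, here is the same toy
interaction written with explicit >>= and with do notation:)
withBind :: IO ()
withBind =
  getLine >>= \name ->
  putStrLn ("hello " ++ name)
withDo :: IO ()
withDo = do
  name <- getLine
  putStrLn ("hello " ++ name)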
(4) is harder still. Polytypic classes are a huge step in the right
direction. What I most long for, though, is the ability to synthesize
new types and new classes---not just simple instance declarations.
As you can probably guess, I think (3) and (4) are the most profitable
avenues of exploration. And I'm pretty sure I _don't_ want syntax
macros for these. I'm still waiting to be convinced what I do want.
-Jan-Willem Maessen
Eager Haskell project
jmaessen@mit.edu
From ru@river.org Sat May 5 12:44:15 2001
From: ru@river.org (Richard)
Date: Sat, 5 May 2001 04:44:15 -0700 (PDT)
Subject: Macros
In-Reply-To: <200105031416.f43EGbl30938@wally.eecs.harvard.edu>
References:
<3AEF5B9F.88D92A76@ia.nsc.com>
<200105031416.f43EGbl30938@wally.eecs.harvard.edu>
Message-ID: <200105051144.EAA10812@ohio.river.org>
Norman Ramsey writes:
>When I compare Lisp and Haskell, the big question in my mind is this:
>is lazy evaluation sufficient to make up for the lack of macros?
it might make sense for Haskell to have a facility that makes it
possible for the programmer to define new bits of syntactic sugar
without changing the compiler.
eg, I recently wanted
case2 foo of ...
as sugar for
foo >>= \output->
case output of ...
if you want to call such an easy-sugar-making facility a macro
facility, fine by me. personally, I wouldn't bother with such a
facility.
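(a self-contained version of the desugaring above, with an invented readCmd
action and Cmd type just so it compiles:)
data Cmd = Quit | Echo String
readCmd :: IO Cmd
readCmd = do
  s <- getLine
  return (if s == "quit" then Quit else Echo s)
-- what "case2 readCmd of ..." would expand to:
main :: IO ()
main = readCmd >>= \output ->
       case output of
         Quit   -> putStrLn "bye"
         Echo s -> putStrLn s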
aside from simple macros that are just sugar, macros go against the
Haskell philosophy, imo, because macros do not obey as many formal laws
as functions do and because a macro is essentially a new language
construct. rather than building what is essentially a language with
dozens and dozens of constructs, the Haskell way is to re-use the 3
constructs of lambda calculus over and over again (with enough sugar to
keep things human-readable).
From fjh@cs.mu.oz.au Sat May 5 15:53:24 2001
From: fjh@cs.mu.oz.au (Fergus Henderson)
Date: Sun, 6 May 2001 00:53:24 +1000
Subject: Macros (Was: Interesting: "Lisp as a competitive advantage")
In-Reply-To:
References: <200105031416.f43EGbl30938@wally.eecs.harvard.edu> <3AF28B19.BC3B4B7D@info.unicaen.fr>
Message-ID: <20010506005324.C6194@hg.cs.mu.oz.au>
On 04-May-2001, Marcin 'Qrczak' Kowalczyk wrote:
> Jerzy Karczmarczuk pisze:
>
> > In Clean there are macros. They are rather infrequently used...
>
> I think they roughly correspond to inline functions in Haskell.
>
> They are separate in Clean because module interfaces are written
> by hand, so the user can include something to be expanded inline in
> other modules by making it a macro.
>
> In Haskell module interfaces are generated by the compiler, so they
> can contain unfoldings of functions worth inlining without explicit
> distinguishing in the source.
I don't think that Clean's module syntax is the reason.
(Or if it is the reason, then it is not a _good_ reason.)
After all, compilers for other languages where module interfaces
are explicitly written by the programmer, e.g. Ada and Mercury, are
still capable of performing intermodule inlining and other intermodule
optimizations if requested.
My guess is that the reason for having macros as a separate construct
is that there is a difference in operational semantics, specifically
with respect to laziness, between macros and variable bindings.
However, this is just a guess; I don't know Clean very well.
--
Fergus Henderson | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: | -- the last words of T. S. Garp.
From Alan@LCS.MIT.EDU Sun May 6 11:15:26 2001
From: Alan@LCS.MIT.EDU (Alan Bawden)
Date: Sun, 6 May 2001 06:15:26 -0400 (EDT)
Subject: Macros
In-Reply-To: <20010505160104.3A2BB255C8@www.haskell.org>
(haskell-cafe-request@haskell.org)
References: <20010505160104.3A2BB255C8@www.haskell.org>
Message-ID: <6May2001.020817.Alan@LCS.MIT.EDU>
Date: Fri, 4 May 2001 19:31:21 -0400 (EDT)
From: Jan-Willem Maessen
Alan Bawden writes:
> A macro facility is like a pair of vise-grips (if you don't know what
> those are, see http://www.technogulf.com/ht-vise.htm).
I found myself laughing heartily at this apt analogy. I have heard
vice grips described as "the wrong tool for every job." (My own
experience with vice grips backs this up).
I considered including that well-known quip in the paragraph where I tried
to make it clear that I was -not- saying the same thing about macros.
(While looking for a good online picture of vise-grips I came across a
number of vise-grip horror stories -- the best was the guy who replaced the
steering wheel in his car with a pair of vise-grips! But I digress...)
So just to reiterate: the property of vise-grips I find analogous to a
macro facility is that the more -other- tools you have available, the less
often you need -this- one, nevertheless you still want this tool in your
tool-box.
That being said, there are a number of things one might want out of a
macro facility, and I think they should be carefully distinguished:
1) The ability to name expressions without evaluating them, e.g. to
cook up a facsimile of laziness.
2) The ability to parrot source code (and maybe source position) back
at the user, e.g. Alan's assert macro, or its C equivalent.
3) The ability to create new binding constructs.
4) The ability to create new declaration constructs.
I like this list. I'd love to see nice elegant programming language
features for doing all of these things. Not only would I have to write
fewer macros, but I'd probably be able to do many amazing -new- things.
(Laziness, for example, doesn't just eliminate the need for a lot of
macros; it allows many new things that are beyond the reach of mere
macrology!)
I suspect, however, that even with everything on this list checked off, I'd
still want macros. Because I doubt that this list is exhaustive. The
example I pulled out of my hat led you to put item #2 on this list, but I
wonder if you would have thought of #2 if you didn't have my example before
you? If I had picked another example, I suspect this list would have
looked different.
- Alan
From qrczak@knm.org.pl Sun May 6 12:30:03 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 6 May 2001 11:30:03 GMT
Subject: Macros
References: <3AEF5B9F.88D92A76@ia.nsc.com> <200105031416.f43EGbl30938@wally.eecs.harvard.edu> <200105051144.EAA10812@ohio.river.org>
Message-ID:
Sat, 5 May 2001 04:44:15 -0700 (PDT), Richard pisze:
> eg, I recently wanted
>
> case2 foo of ...
>
> as sugar for
>
> foo >>= \output->
> case output of ...
Yes, I often miss OCaml's 'function' and SML's 'fn' syntax, which allow
dispatching without inventing a temporary name for either the argument or
the function.
Today I ran across exactly your case. In non-pure languages you would
just write 'case foo of'. I would be happy with just 'function':
get >>= function
... -> ...
... -> ...
I wonder if these parts of Haskell's syntax will stay as they are forever,
or whether there is a chance of some more syntactic sugar.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From chak@cse.unsw.edu.au Mon May 7 06:01:47 2001
From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty)
Date: Mon, 07 May 2001 15:01:47 +1000
Subject: Functional programming in Python
Message-ID: <20010507150147W.chak@cse.unsw.edu.au>
Two quite interesting articles about FP in Python are over
at IBM developerWorks:
http://www-106.ibm.com/developerworks/library/l-prog.html
http://www-106.ibm.com/developerworks/library/l-prog2.html
Two IMHO interesting things to note are the following:
* In Part 1, at the start, there is a bullet list of what
the author regards as FP "features". I found the
following interesting about this list:
- There is no mention of the emphasis placed on strong
typing in many modern functional languages.
- The author makes it sound as if FP can't handle
imperative features, whereas I would say that this is a
problem of the past and wasn't an issue in many FP
languages (Lisp, ML, ...) in the first place.
The opinion of the author is not really surprising, but I
think it indicates a problem in how FP presents itself
to the rest of the world.
* In Part 2, the author writes at the end:
I have found it much easier to get a grasp of functional
programming in the language Haskell than in Lisp/Scheme
(even though the latter is probably more widely used, if
only in Emacs). Other Python programmers might
similarly have an easier time without quite so many
parentheses and prefix (Polish) operators.
I think this is interesting, because both Lisp and Python
are dynamically typed. So, I would have expected the
strong type system to be more of a hurdle than Lisp's
syntax (or lack thereof).
Cheers,
Manuel
From karczma@info.unicaen.fr Mon May 7 11:38:31 2001
From: karczma@info.unicaen.fr (Jerzy Karczmarczuk)
Date: Mon, 07 May 2001 12:38:31 +0200
Subject: Macros (Was: Interesting: "Lisp as a competitive advantage")
References:
Message-ID: <3AF67B27.42396BAE@info.unicaen.fr>
Keith Wansbrough quotes :
>
> Jerzy Karczmarczuk writes:
>
> > Macros in Scheme are used to unfold n-ary control structures such as COND
> > into a hierarchy of IFs, etc. Nothing (in principle) to do with laziness
> > or HO functions.
>
> Isn't this exactly the reason that macros are less necessary in lazy languages?
> In Haskell you can write
> myIf True x y = x
> myIf False x y = y
>
> and then a function like
> recip x = myIf (abs x < eps) 0 (1 / x)
>
> works as expected. In Scheme,
>
> (define myIf
> (lambda (b x y)
> (if b x y)))
> does *not* have the desired behaviour! One can only write myIf using
> macros, or by explicitly delaying the arguments.
==========================
Well, my point was very different *here*. Lazy functions *may* play the role
of control structures (as your myIf in Haskell). In Scheme, with or without
macros, you CANNOT implement "if". But you can write
  (cond (cond1 seq1)
        (cond2 seq2 etc)
        ...
        (condN und so weiter) )
translated into
  (if cond1 seq1
      (if cond2 (begin seq2 etc)
          (if ... )))
In the same way the LET* constructs are expanded, as is a multifunction
DEFINE.
So, here it is not a question of laziness, but of syntactic extensions.
(Of course, one can "cheat" by implementing a user-defined IF as you proposed
above, delaying the last two arguments of this ternary operator, but the
decision about which one will be evaluated will pass through the "real" IF
anyway.)
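(To make that "cheat" concrete on the Haskell side, a minimal sketch of a
cond-like dispatcher written as an ordinary lazy function; the names are
invented:)
cond :: [(Bool, a)] -> a -> a
cond []              def = def
cond ((True,  x):_)  _   = x
cond ((False, _):cs) def = cond cs def
classify :: Int -> String
classify n = cond [ (n < 0,  "negative")
                  , (n == 0, "zero") ]
                  "positive"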
However, if you already have your primitive control structures (i.e. your
underlying machine), you can play with macros to implement lazy continuations,
backtracking, etc. Usually this is awkward, and less efficient than if done
at a more primitive level, where you can easily use unboxed data stored on
true stacks, execute real, fast branching, etc.
I am not sure about this vise-grip analogy. For me macros are façades, a way
to present the surface of the program. The real work, where you need wrenches,
pipes, dynamite and a Bible, all that is hidden behind.
Jerzy Karczmarczuk
Caen, France
From sperber@informatik.uni-tuebingen.de Mon May 7 09:53:44 2001
From: sperber@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor])
Date: 07 May 2001 10:53:44 +0200
Subject: Macros (Was: Interesting: "Lisp as a competitive advantage")
In-Reply-To: (Keith Wansbrough's message of "Fri, 04 May 2001 16:52:14 +0100")
References:
Message-ID:
>>>>> "Keith" == Keith Wansbrough writes:
Keith> Jerzy Karczmarczuk writes:
>> Macros in Scheme are used to unfold n-ary control structures such as COND
>> into a hierarchy of IFs, etc. Nothing (in principle) to do with laziness
>> or HO functions.
Keith> Isn't this exactly the reason that macros are less necessary in
Keith> lazy languages?
Keith> In Haskell you can write
Keith> myIf True x y = x
Keith> myIf False x y = y
Sure, but you're relying on pattern matching to implement the
semantics of myIf, which already is a generalized conditional. In
Scheme, where pattern matching is not primitive, you can get it
through a macro. The same holds for things like DO notation or list
comprehensions, where apparently lazy evaluation by itself doesn't
help in implementing convenient syntax atop a more primitive underlying
notion.
--
Cheers =8-} Mike
Friede, Völkerverständigung und überhaupt blabla
From ronny@cs.kun.nl Mon May 7 10:14:22 2001
From: ronny@cs.kun.nl (Ronny Wichers Schreur)
Date: Mon, 07 May 2001 11:14:22 +0200
Subject: Macros
In-Reply-To: <20010506005324.C6194@hg.cs.mu.oz.au>
References:
<200105031416.f43EGbl30938@wally.eecs.harvard.edu>
<3AF28B19.BC3B4B7D@info.unicaen.fr>
Message-ID: <5.1.0.14.0.20010507101441.00b01008@localhost>
Marcin 'Qrczak' Kowalczyk schrijft:
>I think [Clean macros] roughly correspond to inline functions
>in Haskell.
That's right. I think the most important difference is that Clean
macros can also be used in patterns (if they don't have a lower
case name or contain local functions).
The INLINE pragma for GHC is advisory; macros in Clean will always
be substituted.
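(For concreteness, a minimal sketch of the GHC pragma in question; the hint
may be ignored by the compiler, unlike a Clean macro:)
module Inc (inc) where
-- Ask GHC to inline inc at its call sites in other modules.
-- The pragma is only advisory.
{-# INLINE inc #-}
inc :: Int -> Int
inc x = x + 1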
>They are separate in Clean because module interfaces are written
>by hand, so the user can include something to be expanded inline
>in other modules by making it a macro.
>In Haskell module interfaces are generated by the compiler, so
>they can contain unfoldings of functions worth inlining without
>explicit distinguishing in the source.
Fergus Henderson replies:
>I don't think that Clean's module syntax is the reason. (Or if it
>is the reason, then it is not a _good_ reason.) [...]
You're right, having hand-written interfaces doesn't preclude
compiler-written interfaces (or optimisation files). Let's call it
a pragmatic reason: Clean macros are there because we don't do any
cross-module optimisations and we do want some form of inlining.
Cheers,
Ronny Wichers Schreur
From simonmar@microsoft.com Mon May 7 14:14:51 2001
From: simonmar@microsoft.com (Simon Marlow)
Date: Mon, 7 May 2001 14:14:51 +0100
Subject: Macros
Message-ID: <9584A4A864BD8548932F2F88EB30D1C6115875@TVP-MSG-01.europe.corp.microsoft.com>
> Today I ran across exactly your case. In non-pure languages you would
> just write 'case foo of'. I would be happy with just 'function':
>
> get >>= function
> ... -> ...
> ... -> ...
Well, simply extending the Haskell syntax to allow
    \ p11 .. p1n -> e1
      ..
      pm1 .. pmn -> em
(with appropriate layout) should be ok, but I haven't tried it. Guarded
right-hand-sides could be allowed too.
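(Until then, the closest plain Haskell 98 gets is a named local helper with
several equations instead of a multi-clause lambda; a small sketch with
invented names:)
handle :: Maybe Int -> String
handle = go
  where
    go Nothing  = "nothing"
    go (Just n) = "got " ++ show n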
Cheers,
Simon
From rossberg@ps.uni-sb.de Mon May 7 14:34:49 2001
From: rossberg@ps.uni-sb.de (Andreas Rossberg)
Date: Mon, 07 May 2001 15:34:49 +0200
Subject: Macros
References: <9584A4A864BD8548932F2F88EB30D1C6115875@TVP-MSG-01.europe.corp.microsoft.com>
Message-ID: <3AF6A479.5A49AA99@ps.uni-sb.de>
Simon Marlow wrote:
>
> Well, simply extending the Haskell syntax to allow
>
> \ p11 .. p1n -> e1
> ..
> pm1 .. pmn -> em
>
> (with appropriate layout) should be ok, but I haven't tried it. Guarded
> right-hand-sides could be allowed too.
Introducing layout after \ will break a lot of programs. For example,
consider the way >>= is often formatted:
f >>= \x ->
g >>= \y ->
...
I guess that's why Marcin suggested using a new keyword.
- Andreas
--
Andreas Rossberg, rossberg@ps.uni-sb.de
"Computer games don't affect kids.
If Pac Man affected us as kids, we would all be running around in
darkened rooms, munching pills, and listening to repetitive music."
From simonmar@microsoft.com Mon May 7 14:42:20 2001
From: simonmar@microsoft.com (Simon Marlow)
Date: Mon, 7 May 2001 14:42:20 +0100
Subject: Macros
Message-ID: <9584A4A864BD8548932F2F88EB30D1C6115876@TVP-MSG-01.europe.corp.microsoft.com>
> Simon Marlow wrote:
> >
> > Well, simply extending the Haskell syntax to allow
> >
> >     \ p11 .. p1n -> e1
> >       ..
> >       pm1 .. pmn -> em
> >
> > (with appropriate layout) should be ok, but I haven't tried
> > it. Guarded right-hand-sides could be allowed too.
>
> Introducing layout after \ will break a lot of programs. For example,
> consider the way >>= is often formatted:
>
> f >>= \x ->
> g >>= \y ->
> ...
>
> I guess that's why Marcin suggested using a new keyword.
Ah yes, I forgot that lambda expressions are often used like that
(actually, I think that this use of the syntax is horrible, but that's
just MHO).
Cheers,
Simon
From mg169780@students.mimuw.edu.pl Mon May 7 16:48:47 2001
From: mg169780@students.mimuw.edu.pl (Michal Gajda)
Date: Mon, 7 May 2001 17:48:47 +0200 (CEST)
Subject: Macros[by implementor of toy compiler]
In-Reply-To: <4May2001.142441.Alan@LCS.MIT.EDU>
Message-ID:
On Fri, 4 May 2001, Alan Bawden wrote:
> (...)
> But I think that in many circumstances macros do
> such a good job that I don't see the need to clutter up the language with
> the special-prupose features needed to replace them.
> (...)
I'm currently having fun writing a compiler from an eager lambda calculus
with hygienic macros to its desugared form (using parser combinators,
I estimate 2k lines of ML code), so I'll take the liberty of sharing my
experiences. A working version (simple and without typechecking, so only
syntactic correctness is checked) is expected at the end of the
month.
* Introducing general hygienic macros as you propose forces us to
cope with the following problems:
1. Full typechecking of macros (at the place of definition) seems to need
rank-2 polymorphism. (Decidable, but harder to implement.) Of course
you can delay typechecking until you have expanded all macros, but then
the error messages become unreadable.
[in Lisp this is a non-issue]
2. Macros make the parsed grammar dynamic. Usually a compiler has a
hard-coded parser, generated by an LALR parser generator (like Happy or
Yacc), compiled in. Introducing each macro as you proposed would (I think)
require generating a new parser (at least for that fragment of the grammar).
[In Lisp we just pick out a form whose head names a macro, and then apply
the macro's semantic function to the macro's body at COMPILE TIME. So:
a) the basic syntax is unchanged
b) we need to evaluate expressions at compile time]
* On the other hand it yields the following advantages:
1. Powerful enough to implement do-notation (for sure) and (probably) all
or most of the other to-core Haskell translations.
2. Lifts (unsurprisingly) the need to use the cpp preprocessor when handling
compatibility issues.
Hope it helps :-) [and feel free to mail me if you are interested]
Michal Gajda
korek@icm.edu.pl
From Alan@LCS.MIT.EDU Tue May 8 08:15:06 2001
From: Alan@LCS.MIT.EDU (Alan Bawden)
Date: Tue, 8 May 2001 03:15:06 -0400 (EDT)
Subject: Macros[by implementor of toy compiler]
In-Reply-To:
(message from Michal Gajda on Mon, 7 May 2001 17:48:47 +0200 (CEST))
References:
Message-ID: <8May2001.005216.Alan@LCS.MIT.EDU>
Date: Mon, 7 May 2001 17:48:47 +0200 (CEST)
From: Michal Gajda
* Introduction of general hygienic macro's as you propose, forces us to
cope with following problems:
1. Full typechecking of macros (at the place of definition) seems to need
rank-2 polymorphism. (Decidable, but harder to implement.) Of course
you can delay typechecking until you have expanded all macros, but then
the error messages become unreadable.
...
An interesting option is to allow the macro writer to supply his own
type-checker that is then run before macro-expansion. Type errors can then
be presented to the user using the pre-expansion source code.
I can hear you all boggling over the lack of safety in what I have just
proposed. Suppose the macro-writer provides a bogus type-checker? Ack!
Unsafe code! Mid-air collisions! Nuclear melt-downs! Am I nuts!?
Actually no. Just do a separate type-check of the fully expanded code
using only the built-in type-checkers. If this verification check comes up
with a different answer, then one of your macros has a buggy type-checker.
(What you do about finding -that- bug and reporting it to the author of the
macro is another issue...)
A student did a Masters thesis for Olin Shivers and me at MIT last year
trying to work out the kinks in a macro facility that would work
approximately this way. The results were interesting, but not yet ready
for prime time.
From Keith.Wansbrough@cl.cam.ac.uk Tue May 8 16:02:07 2001
From: Keith.Wansbrough@cl.cam.ac.uk (Keith Wansbrough)
Date: Tue, 08 May 2001 16:02:07 +0100
Subject: Macros[by implementor of toy compiler]
In-Reply-To: Message from Michal Gajda
of "Mon, 07 May 2001 17:48:47 +0200."
Message-ID:
> 2. Macros make the parsed grammar dynamic. Usually a compiler has a
> hard-coded parser, generated by an LALR parser generator (like Happy or
> Yacc), compiled in. Introducing each macro as you proposed would (I think)
> require generating a new parser (at least for that fragment of the grammar).
Dylan has macros and a syntax more interesting than LISP's, so perhaps
it would be worth looking at how they handle it. I forget now, but I
don't think it was too difficult.
I also considered some of the issues we're discussing in an
unpublished paper that appears on my publications page,
http://www.cl.cam.ac.uk/~kw217/research/papers.html#Wansbrough99:Macros
(Section 8 looks briefly at Dylan-style macro facilities).
--KW 8-)
From brk@jenkon.com Tue May 8 17:53:07 2001
From: brk@jenkon.com (brk@jenkon.com)
Date: Tue, 8 May 2001 09:53:07 -0700
Subject: Functional programming in Python
Message-ID: <61AC3AD3E884D411836F0050BA8FE9F33551F3@franklin.jenkon.com>
Hi Manuel,
It's interesting to me to note the things that were interesting to
you. :-) I'm the author of the Xoltar Toolkit (including functional.py)
mentioned in those articles, and I have to agree with Dr. Mertz - I find
Haskell much more palatable than Lisp or Scheme. Many (most?) Python
programmers also have experience in more typeful languages (typically at
least C, since that's how one writes Python extension modules) so perhaps
that's not as surprising as it might seem.
Type inference (to my mind at least) fits the Python mindset very
well. I think most Python programmers would be glad to have strong typing,
so long as they don't have to press more keys to get it. If you have to
declare all your types up front, it just means more time spent changing type
declarations as the design evolves, but if the compiler can just ensure your
usage is consistent, that's hard to argue with.
As for the difficulty with imperative constructs, I agree it's not
even an issue for many (Dylan, ML, et al.) languages, but for Haskell it
still is, in my humble opinion. I found the task of writing a simple program
that did a few simple imperative things inordinately difficult. I know about
the 'do' construct, and I understand the difference between >> and >>=. I've
read a book on Haskell, and implemented functional programming support for
Python, but trying to use Haskell to write complete programs still ties my
brain in knots. I see there are people writing complete, non-trivial
programs in Haskell, but I don't see how.
To be sure, I owe Haskell more of my time and I owe it to myself to
overcome this difficulty, but I don't think it's only my difficulty. In the
Haskell book I have, discussion of I/O is delayed until chapter 18, if
memory serves. One thing that might really help Haskell become more popular
is more documentation which presents I/O in chapter 2 or 3. Clearly the
interesting part of a functional language is the beauty of stringing
together all these functions into a single, elegant expression, but an
introductory text would do well to focus on more immediate problems first.
People on this list and others often say that the main body of the program
is almost always imperative in style, but there's little demonstration of
that fact - most examples are of a purely functional nature.
Please understand I mean these comments in the most constructive
sense. I have the highest respect for Haskell and the folks who work with
it.
Bryn
> -----Original Message-----
> From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> Sent: Sunday, May 06, 2001 10:02 PM
> To: haskell-cafe@haskell.org
> Subject: Functional programming in Python
>
> Two quite interesting articles about FP in Python are over
> at IBM developerWorks:
>
> http://www-106.ibm.com/developerworks/library/l-prog.html
> http://www-106.ibm.com/developerworks/library/l-prog2.html
>
> Two IMHO interesting things to note are the following:
>
> * In Part 1, at the start, there is a bullet list of what
> the author regards as FP "features". I found the
> following interesting about this list:
>
> - There is no mention of the emphasis placed on strong
> typing in many modern functional languages.
>
> - The author makes it sound as if FP can't handle
> imperative features, whereas I would say that this is a
> problem of the past and wasn't an issue in many FP
> languages (Lisp, ML, ...) in the first place.
>
> The opinion of the author is not really surprising, but I
> think it indicates a problem in how FP presents itself
> to the rest of the world.
>
> * In Part 2, the author writes at the end:
>
> I have found it much easier to get a grasp of functional
> programming in the language Haskell than in Lisp/Scheme
> (even though the latter is probably more widely used, if
> only in Emacs). Other Python programmers might
> similarly have an easier time without quite so many
> parentheses and prefix (Polish) operators.
>
> I think this is interesting, because both Lisp and Python
> are dynamically typed. So, I would have expected the
> strong type system to be more of a hurdle than Lisp's
> syntax (or lack thereof).
>
> Cheers,
> Manuel
>
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
From erik@meijcrosoft.com Tue May 8 22:54:24 2001
From: erik@meijcrosoft.com (Erik Meijer)
Date: Tue, 8 May 2001 14:54:24 -0700
Subject: Functional programming in Python
References: <61AC3AD3E884D411836F0050BA8FE9F33551F3@franklin.jenkon.com>
Message-ID: <004901c0d809$73164b00$34b11eac@redmond.corp.microsoft.com>
Interestingly enough, I have the same feeling with Python!
> As for the difficulty with imperative constructs, I agree it's not
> even an issue for many (Dylan, ML, et. al.) languages, but for Haskell it
> still is, in my humble opinion. I found the task of writing a simple
> program that did a few simple imperative things inordinately difficult.
> I know about the 'do' construct, and I understand the difference between
> >> and >>=. I've read a book on Haskell, and implemented functional
> programming support for
> Python, but trying to use Haskell to write complete programs still ties my
> brain in knots. I see there are people writing complete, non-trivial
> programs in Haskell, but I don't see how.
From chak@cse.unsw.edu.au Wed May 9 08:56:39 2001
From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty)
Date: Wed, 09 May 2001 17:56:39 +1000
Subject: Functional programming in Python
In-Reply-To: <61AC3AD3E884D411836F0050BA8FE9F33551F3@franklin.jenkon.com>
References: <61AC3AD3E884D411836F0050BA8FE9F33551F3@franklin.jenkon.com>
Message-ID: <20010509175639H.chak@cse.unsw.edu.au>
brk@jenkon.com wrote,
> It's interesting to me to note the things that were interesting to
> you. :-) I'm the author of the Xoltar Toolkit (including functional.py)
> mentioned in those articles
Cool :-)
> and I have to agree with Dr. Mertz - I find
> Haskell much more palatable than Lisp or Scheme. Many (most?) Python
> programmers also have experience in more typeful languages (typically at
> least C, since that's how one writes Python extension modules) so perhaps
> that's not as surprising as it might seem.
Ok, but there are worlds between C's type system and
Haskell's.[1]
> Type inference (to my mind at least) fits the Python mindset very
> well.
So, how about the following conjecture? Types essentially
only articulate properties about a program that a good
programmer would be aware of anyway and would strive to
reinforce in a well-structured program. Such a programmer
might not have many problems with a strongly typed language.
Now, to me, Python has this image of a well designed
scripting language attracting the kind of programmer who
strives for elegance and well-structured programs. Maybe
that is a reason.
> I think most Python programmers would be glad to have strong typing,
> so long as they don't have to press more keys to get it. If you have to
> declare all your types up front, it just means more time spent changing type
> declarations as the design evolves, but if the compiler can just ensure your
> usage is consistent, that's hard to argue with.
Type inference (as opposed to mere type checking) is
certainly a design goal in Haskell.
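(A small illustration: no signature is written for swapPairs, yet the
compiler infers swapPairs :: [(a, b)] -> [(b, a)] and checks every use
against it; the function is just an invented example:)
swapPairs xs = [ (y, x) | (x, y) <- xs ]
main :: IO ()
main = print (swapPairs [(1 :: Int, 'a'), (2, 'b')])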
> As for the difficulty with imperative constructs, I agree it's not
> even an issue for many (Dylan, ML, et. al.) languages, but for Haskell it
> still is, in my humble opinion. I found the task of writing a simple program
> that did a few simple imperative things inordinately difficult. I know about
> the 'do' construct, and I understand the difference between >> and >>=. I've
> read a book on Haskell, and implemented functional programming support for
> Python, but trying to use Haskell to write complete programs still ties my
> brain in knots. I see there are people writing complete, non-trivial
> programs in Haskell, but I don't see how.
>
> To be sure, I owe Haskell more of my time and I owe it to myself to
> overcome this difficulty, but I don't think it's only my difficulty. In the
> Haskell book I have, discussion of I/O is delayed until chapter 18, if
> memory serves. One thing that might really help Haskell become more popular
> is more documentation which presents I/O in chapter 2 or 3. Clearly the
> interesting part of a functional language is the beauty of stringing
> together all these functions into a single, elegant expression, but an
> introductory text would do well to focus on more immediate problems first.
Absolutely. In fact, you have just pointed out one of the
gripes that I have with most Haskell texts and courses. The
shunning of I/O in textbooks is promoting the image of
Haskell as a purely academic exercise. Something which is
not necessary at all, I am teaching an introductory course
with Haskell myself and did I/O in Week 5 out of 14 (these
are students without any previous programming experience).
Moreover, IIRC Paul Hudak's book
also introduces I/O early.
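(For concreteness, a minimal example of the sort of early I/O program in
question:)
main :: IO ()
main = do
  putStr "What is your name? "
  name <- getLine
  putStrLn ("Hello, " ++ name ++ "!")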
In other words, I believe that this is a problem with the
presentation of Haskell and not with Haskell itself.
Cheers,
Manuel
[1] You might wonder why I am pushing this point. It is
just because the type system seems to be a hurdle for
some people who try Haskell. I am curious to understand
why it is a problem for some and not for others.
From C.T.McBride@durham.ac.uk Thu May 10 13:13:01 2001
From: C.T.McBride@durham.ac.uk (C T McBride)
Date: Thu, 10 May 2001 13:13:01 +0100 (BST)
Subject: argument permutation and fundeps
Message-ID:
Hi
This is a long message, containing a program which makes heavy use of
type classes with functional dependencies, and a query about how the
typechecker treats them. It might be a bit of an effort, but I'd be
grateful for any comment and advice more experienced Haskellers can
spare the time to give me. Er, this ain't no undergrad homework...
I'm a dependently typed programmer and a bit of an old
Lisphead. What's common to both is that it's fairly easy to write
`program reconstruction operations', like the argument permutation
function (of which flip is an instance), specified informally thus:
Given a permutation p on [1..n] and an n-ary function f,
permArg p f x_p(1) .. x_p(n) = f x_1 .. x_n
It's easy in Lisp because you can just compute the relevant
lambda-expression for permArg p f syntactically, then use it as a
program: of course, there's no type system to make sure your p is
really a perm and your f has enough arguments. To give permArg a
precise type, we have to compute an n-ary permutation of the argument
types for an n-ary function space. That is, we need a representation
of permutations p from which to compute
(1) the type of the permutation function
(2) the permutation function itself
This isn't that hard with dependent types, because we can use the same
kinds of data and program as easily at the level of types as we can at the
level of terms.
I'm relatively new to Haskell: what attracted me is that recent extensions
to the language have introduced a model of computation at the type level
via multi-parameter classes with functional dependencies. Although
type-level Haskell programming is separate from, different to, and not
nearly as powerful as term-level Haskell programming, it's still possible
to do some pretty interesting stuff: here's permArg as a Haskell program...
Although we can't compute with types over a data encoding of
permutations, we can use type classes to make `fake' datatypes one
level up. Here's a class representing the natural numbers. Each type
in the class has a single element, which we can use to tell the
typechecker which instance of the class we mean.
> class Nat n
> data O = O
> instance Nat O
> data (Nat n) => S n = S n
> instance (Nat n) => Nat (S n)
Now, for each n in Nat, we need a class Fin n with exactly n instances. A
choice of instance represents the position the first argument of an n-ary
function gets shifted to. We can make Fin (S n) by embedding Fin n with
one type constructor, FS, and chucking in a new type with another, FO.
> class (Nat n) => Fin n x | x -> n
> data (Nat n) => FO n = FO n
> instance (Nat n) => Fin (S n) (FO n)
> data (Nat n,Fin n x) => FS n x = FS n x
> instance (Nat n,Fin n x) => Fin (S n) (FS n x)
The class Factorial n contains n! types---enough to represent the
permutations. It's computed in the traditional way, but this time the
product is cartesian...
> class (Nat n) => Factorial n p | p -> n
> instance Factorial O ()
> instance (Factorial n p,Fin (S n) x) => Factorial (S n) (x,p)
The operation InsertArg n x r s, where x is in Fin (S n), takes an
n-ary function type s and inserts an extra argument type r at
whichever of the (S n) positions is selected by x. The corresponding
function, insertArg, permutes an (S n)-ary function in r -> s by
flipping until the first argument has been moved to the nominated
position. FS codes `flip the argument further in and keep going'; FO
codes `stop flipping'.
> class (Nat n,Fin (S n) x) =>
> InsertArg n x r s t | x -> n, x r s -> t, x t -> r s where
> insertArg :: x -> (r -> s) -> t
> instance (Nat n) => InsertArg n (FO n) r s (r -> s) where
> insertArg (FO _) f = f
> instance (Nat n,Fin (S n) x,InsertArg n x r s t) =>
> InsertArg (S n) (FS (S n) x) r (a -> s) (a -> t) where
> insertArg (FS _ x) f a = insertArg x (flip f a)
PermArg simply works its way down the factorial, performing the InsertArg
indicated at each step. permArg is correspondingly built with insertArg.
> class (Nat n,Factorial n p) =>
> PermArg n p s t | p s -> t, p t -> s where
> permArg :: p -> s -> t
> instance PermArg O () t t where
> permArg () t = t
> instance (InsertArg n x r t u,PermArg n p s t) =>
> PermArg (S n) (x,p) (r -> s) u where
> permArg (x,p) f = insertArg x (\r -> permArg p (f r))
This code is accepted by the Feb 2000 version of Hugs, of course with
-98 selected.
Let's look at some examples: the interesting thing is how the typechecker
copes. Here's the instance of permArg which corresponds to flip
Main> permArg (FS (S O) (FO O),(FO O,()))
ERROR: Unresolved overloading
*** Type : (InsertArg (S O) (FS (S O) (FO O)) a b c,
InsertArg O (FO O) d e b,
PermArg O () f e)
=> (a -> d -> f) -> c
*** Expression : permArg (FS (S O) (FO O),(FO O,()))
OK, I wasn't expecting that to work, because I didn't tell it the type
of the function to permute: however, the machine did figure it out. Look
at where it got stuck: I'd have hoped it would compute that e is f, and hence
that b is d -> f, then possibly even that c is d -> a -> f. Isn't that
what the functional dependencies say?
On the other hand, if I tell it the answer, it's fine.
Main> :t permArg (FS (S O) (FO O),(FO O,())) :: (a -> b -> c) -> b -> a -> c
permArg (FS (S O) (FO O),(FO O,())) :: (a -> b -> c) -> b -> a -> c
First, a nice little monomorphic function.
> elemChar :: Char -> [Char] -> Bool
> elemChar = elem
There's no doubt about the type of the input, but this happens:
Main> permArg (FS (S O) (FO O),(FO O,())) elemChar
ERROR: Unresolved overloading
*** Type : (InsertArg (S O) (FS (S O) (FO O)) Char a b,
InsertArg O (FO O) [Char] c a,
PermArg O () Bool c)
=> b
*** Expression : permArg (FS (S O) (FO O),(FO O,())) elemChar
What am I doing wrong? I hope my program says that c is Bool, and so on.
Again, tell it the answer and it checks out:
Main> :t (permArg (FS (S O) (FO O),(FO O,())) elemChar)
:: [Char] -> Char -> Bool
permArg (FS (S O) (FO O),(FO O,())) elemChar :: [Char] -> Char -> Bool
Adding some arguments gives enough information about `the answer' to
get rid of the explicit typing.
Main> permArg (FS (S O) (FO O),(FO O,())) elemChar ['a','b','c'] 'b'
True :: Bool
It's the same story with arity 3...
Main> :t permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,()))) ::
(a -> b -> c -> d) -> (c -> a -> b -> d)
permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,()))) ::
(a -> b -> c -> d) -> c -> a -> b -> d
Main> permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,()))) foldl
ERROR: Unresolved overloading
*** Type : (InsertArg (S (S O)) (FS (S (S O)) (FO (S O))) (a -> b -> a) c d,
InsertArg (S O) (FS (S O) (FO O)) a e c,
InsertArg O (FO O) [b] f e,
PermArg O () a f)
=> d
*** Expression : permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,())))
foldl
Main> :t (permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,()))) ::
(a -> b -> c -> d) -> (c -> a -> b -> d)) foldl
permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,()))) foldl ::
[a] -> (b -> a -> b) -> b -> b
Main> permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,())))
foldl [1,2,3] (+) 0
6 :: Integer
So, am I failing to explain to Hugs why PermArg and InsertArg are programs,
despite the explicit functional dependencies, or is the typechecker just not
running them? It seems to be expanding PermArg's step case ok, but not
executing the base case, leaving InsertArg blocked. Can anyone shed some
light on the operational semantics of programming at the type level?
Having said all that, I'm really impressed that even this much is possible
in Haskell. It's so nice to be able to write a type that says exactly what I
mean.
Cheers
Conor
From jeff@galconn.com Thu May 10 16:34:02 2001
From: jeff@galconn.com (Jeffrey R. Lewis)
Date: Thu, 10 May 2001 08:34:02 -0700
Subject: argument permutation and fundeps
References:
Message-ID: <3AFAB4E9.8F9D3EA2@galconn.com>
C T McBride wrote:
> Hi
>
> This is a long message, containing a program which makes heavy use of
> type classes with functional dependencies, and a query about how the
> typechecker treats them. It might be a bit of an effort, but I'd be
> grateful for any comment and advice more experienced Haskellers can
> spare the time to give me. Er, this ain't no undergrad homework...
Without delving too deeply into your example, it looks like you've bumped
into a known bug in Hugs implementation of functional dependencies. You
should try GHCI if you can - it doesn't suffer from this bug.
--Jeff
From mpj@cse.ogi.edu Thu May 10 17:02:41 2001
From: mpj@cse.ogi.edu (Mark P Jones)
Date: Thu, 10 May 2001 09:02:41 -0700
Subject: argument permutation and fundeps
In-Reply-To: <3AFAB4E9.8F9D3EA2@galconn.com>
Message-ID:
Hi Jeff,
| Without delving too deeply into your example, it looks like
| you've bumped into a known bug in Hugs implementation of
| functional dependencies. You should try GHCI if you can - it
| doesn't suffer from this bug.
Are there any plans to fix the bug in Hugs? (And is there
anywhere that the bug is documented?)
All the best,
Mark
From C T McBride Thu May 10 17:13:34 2001
From: C T McBride (C T McBride)
Date: Thu, 10 May 2001 17:13:34 +0100 (BST)
Subject: argument permutation and fundeps
In-Reply-To: <3AFAB4E9.8F9D3EA2@galconn.com>
Message-ID:
> C T McBride wrote:
>
> > Hi
> >
> > This is a long message, containing a program which makes heavy use of
> > type classes with functional dependencies, and a query about how the
> > typechecker treats them. It might be a bit of an effort, but I'd be
> > grateful for any comment and advice more experienced Haskellers can
> > spare the time to give me. Er, this ain't no undergrad homework...
Jeffrey R. Lewis:
>
> Without delving too deeply into your example, it looks like you've bumped
> into a known bug in Hugs implementation of functional dependencies. You
> should try GHCI if you can - it doesn't suffer from this bug.
Thanks for the tip! Our local Haskell supremo has pointed me to a version
I can run, and that has improved the situation... a bit.
Now I get
Perm> permArg (FS (S O) (FO O),(FO O,()))
No instance for `Show ((r -> s -> s) -> s -> r -> s)'
Obviously, there's no Show method, but I was expecting a more general
type. But if I tell it the right answer, it believes me
Perm> permArg (FS (S O) (FO O),(FO O,()))
:: (a -> b -> c) -> b -> a -> c
No instance for `Show ((a -> b -> t) -> b -> a -> t)'
The main thing is that I can permute a function correctly, without needing
an explicit signature:
Perm> permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,())))
foldl
No instance for `Show ([b] -> (t -> b -> t) -> t -> t)'
Still, those too-specific inferred types are disturbing me a little
Perm> permArg (FS (S (S O)) (FO (S O)),(FS (S O) (FO O),(FO O,())))
No instance for `Show ((r -> s -> s -> s) -> s -> r -> s -> s)'
The above examples show that my code works with the generality it needs
to. Is there some `defaulting' mechanism at work here for the inferred
types making all those s's the same?
But this is definitely progress!
Thanks
Conor
From kort@science.uva.nl Fri May 11 13:19:21 2001
From: kort@science.uva.nl (Jan Kort)
Date: Fri, 11 May 2001 14:19:21 +0200
Subject: sharing datatypes : best practice ?
References: <9818339E731AD311AE5C00902715779C0371A6CF@szrh00313.tszrh.csfb.com>
Message-ID: <3AFBD8C9.900BD754@wins.uva.nl>
"Taesch, Luc" wrote:
>
> do u isolate just the datatype, or a few related with, in a very small file (header like, i would say)
> or some basic accessor function with it ?
>
> isnt it leading to massiv quantities of small files ?
Assuming you have some typed AST with many mutually recursive
datatypes, I would keep them in one big file. This should be
fine if the datatypes are simple (no "deriving" Read and Show
etc.).
For an AST you don't want accessor functions: the datatypes
are the interface. For some datatypes you want to hide
the datatype and provide a function-based interface; this
should be in the same file as the datatype.
Usually there is also some kind of assumed hierarchy in
datatypes, e.g. Int < List < FiniteMap, to determine where
functions operating on multiple datatypes should be
placed, but that's the same in OO.
Jan
From khaliff@astercity.net Fri May 11 17:30:27 2001
From: khaliff@astercity.net (Wojciech Moczydlowski, Jr)
Date: Fri, 11 May 2001 18:30:27 +0200 (CEST)
Subject: Arrays in Haskell, was: Re: Functional programming in Python
In-Reply-To: <004901c0d809$73164b00$34b11eac@redmond.corp.microsoft.com>
Message-ID:
On Tue, 8 May 2001, Erik Meijer wrote:
> Interestingly enough, I have the same feeling with Python!
Speaking of problems with Haskell, almost every time I write a larger
program, I'm frustrated by the lack of efficient arrays/hashtables in the
standard. I know about ghc's (I|U|M)Arrays for arrays, and there are probably
hashtables implemented in the Edison library, but the program's
portability would be lost and nhc/hugs would protest. I would be very
happy if the Haskell developers could settle on a simple, unsophisticated
standard for arrays.
I personally would like an interface like:
data Array type_of_objects_stored = ... -- abstract
data MArray a b = ... -- abstract
instance Monad (MArray a)
put :: Int -> a -> Array a -> MArray ()
get :: Array a -> MArray a
runMArray :: Int -> MArray a -> a -- int parameter is a size of used
array.
Even if they were put in IO, I still would not protest. Anything is better
than nothing.
Wojciech Moczydlowski, Jr
From mg169780@students.mimuw.edu.pl Fri May 11 22:48:47 2001
From: mg169780@students.mimuw.edu.pl (Michal Gajda)
Date: Fri, 11 May 2001 23:48:47 +0200 (CEST)
Subject: Arrays in Haskell
In-Reply-To:
Message-ID:
On Fri, 11 May 2001, Wojciech Moczydlowski, Jr wrote:
> data Array type_of_objects_stored = ... -- abstract
> data MArray a b = ... -- abstract
>
> put :: Int -> a -> Array a -> MArray a ()
Probably you meant:
put :: Int -> a -> MArray a ()
> get :: Array a -> MArray a a
get :: Int -> MArray a a
> runMArray :: Int -> MArray a -> a -- int parameter is a size of used
> array.
runMArray :: Int -> a -> MArray a b -> b
[first a is an initializer]
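(For what it's worth, something close to this interface can already be built
on top of GHC's ST arrays. Below is a sketch, assuming the Control.Monad.ST
and Data.Array.ST modules; the type is called MArr to avoid clashing with the
library's MArray class, and the extra s parameter is the usual ST state
token:)
{-# LANGUAGE RankNTypes #-}
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STArray, newArray, readArray, writeArray)
-- MArr s e a: a computation over one mutable Int-indexed array of e's.
newtype MArr s e a = MArr { unMArr :: STArray s Int e -> ST s a }
instance Functor (MArr s e) where
  fmap f (MArr g) = MArr (fmap f . g)
instance Applicative (MArr s e) where
  pure x            = MArr (\_ -> return x)
  MArr f <*> MArr g = MArr (\arr -> f arr <*> g arr)
instance Monad (MArr s e) where
  MArr g >>= k = MArr (\arr -> g arr >>= \x -> unMArr (k x) arr)
put :: Int -> e -> MArr s e ()
put i x = MArr (\arr -> writeArray arr i x)
get :: Int -> MArr s e e
get i = MArr (\arr -> readArray arr i)
-- runMArr size ini m: run m on a fresh array of the given size,
-- with every cell initialised to ini.
runMArr :: Int -> e -> (forall s. MArr s e a) -> a
runMArr size ini m = runST (newArray (0, size - 1) ini >>= unMArr m)
-- example: store a value and read it back
example :: Int
example = runMArr 10 0 (put 3 42 >> get 3)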
Greetings
Michal Gajda
korek@icm.edu.pl
From khaliff@astercity.net Fri May 11 22:20:12 2001
From: khaliff@astercity.net (Wojciech Moczydlowski, Jr)
Date: Fri, 11 May 2001 23:20:12 +0200 (CEST)
Subject: Arrays in Haskell
In-Reply-To:
Message-ID:
On Fri, 11 May 2001, Michal Gajda wrote:
> On Fri, 11 May 2001, Wojciech Moczydlowski, Jr wrote:
> Probably you meant:
> put :: Int -> a -> MArray a ()
> get :: Int -> MArray a a
> runMArray :: Int -> a -> MArray a b -> b
You're of course right. I should have reread my letter before sending.
> Michal Gajda
Wojciech Moczydlowski, Jr
From ralf@informatik.uni-bonn.de Sun May 13 09:44:19 2001
From: ralf@informatik.uni-bonn.de (Ralf Hinze)
Date: Sun, 13 May 2001 10:44:19 +0200
Subject: argument permutation and fundeps
References:
Message-ID: <3AFE4963.3F4ECBA2@informatik.uni-bonn.de>
Dear Conor,
thanks for posing the `argument permutation' problem. I had several
hours of fun hacking up a solution that works under `ghci' (as Jeff
pointed out, Hugs probably suffers from a bug). The solution is relatively
close to your program; I only simplified the representation of the
`factorial numbers' that select a permutation. The program is
attached below.
Here is a sample interaction (using `ghci -fglasgow-exts PermArg.lhs'):
PermArgs> :t pr
Int -> Bool -> Char -> [Char]
PermArgs> :t perm (a0 <: a0 <: nil) pr
Int -> Bool -> Char -> [Char]
PermArgs> :t perm (a0 <: a1 <: nil) pr
Int -> Char -> Bool -> [Char]
PermArgs> :t perm (a1 <: a0 <: nil) pr
Bool -> Int -> Char -> [Char]
PermArgs> :t perm (a1 <: a1 <: nil) pr
Char -> Int -> Bool -> [Char]
PermArgs> :t perm (a2 <: a0 <: nil) pr
Bool -> Char -> Int -> [Char]
PermArgs> :t perm (a2 <: a1 <: nil) pr
Char -> Bool -> Int -> [Char]
Cheers, Ralf
---
> module PermArgs where
Natural numbers.
> data Zero = Zero
> data Succ nat = Succ nat
Some syntactic sugar.
> a0 = Zero
> a1 = Succ a0
> a2 = Succ a1
> a3 = Succ a2
Inserting first argument.
> class Insert nat a x y | nat a x -> y, nat y -> a x where
> insert :: nat -> (a -> x) -> y
>
> instance Insert Zero a x (a -> x) where
> insert Zero f = f
> instance (Insert nat a x y) => Insert (Succ nat) a (b -> x) (b -> y) where
> insert (Succ n) f a = insert n (flip f a)
Some test data.
> pr :: Int -> Bool -> Char -> String
> pr i b c = show i ++ show b ++ show c
An example session (using `ghci -fglasgow-exts PermArg.lhs'):
PermArgs> :t insert a0 pr
Int -> Bool -> Char -> String
PermArgs> :t insert a1 pr
Bool -> Int -> Char -> String
PermArgs> :t insert a2 pr
Bool -> Char -> Int -> [Char]
PermArgs> :t insert a3 pr
No instance for `Insert (Succ Zero) Int String y'
arising from use of `insert' at
Lists.
> data Nil = Nil
> data Cons nat list = Cons nat list
Some syntactic sugar.
> infixr 5 <:
> (<:) = Cons
> nil = a0 <: Nil
Permuting arguments.
> class Perm list x y | list x -> y, list y -> x where
> perm :: list -> x -> y
>
> instance Perm Nil x x where
> perm Nil f = f
> instance (Insert n a y z, Perm list x y) => Perm (Cons n list) (a -> x) z where
> perm (Cons d ds) f = insert d (\a -> perm ds (f a))
An example session (using `ghci -fglasgow-exts PermArg.lhs'):
PermArgs> :t perm (a0 <: a0 <: nil) pr
Int -> Bool -> Char -> [Char]
PermArgs> :t perm (a0 <: a1 <: nil) pr
Int -> Char -> Bool -> [Char]
PermArgs> :t perm (a1 <: a0 <: nil) pr
Bool -> Int -> Char -> [Char]
PermArgs> :t perm (a1 <: a1 <: nil) pr
Char -> Int -> Bool -> [Char]
PermArgs> :t perm (a2 <: a0 <: nil) pr
Bool -> Char -> Int -> [Char]
PermArgs> :t perm (a2 <: a1 <: nil) pr
Char -> Bool -> Int -> [Char]
PermArgs> :t perm (a2 <: a2 <: nil) pr
No instance for `Insert (Succ Zero) Int y y1'
arising from use of `perm' at
No instance for `Insert (Succ Zero) Bool [Char] y'
arising from use of `perm' at
From brk@jenkon.com Mon May 14 21:36:23 2001
From: brk@jenkon.com (brk@jenkon.com)
Date: Mon, 14 May 2001 13:36:23 -0700
Subject: Functional programming in Python
Message-ID: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
> -----Original Message-----
> From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> Sent: Wednesday, May 09, 2001 12:57 AM
> To: brk@jenkon.com
> Cc: haskell-cafe@haskell.org
> Subject: RE: Functional programming in Python
>
[Bryn Keller] [snip]
>
> > and I have to agree with Dr. Mertz - I find
> > Haskell much more palatable than Lisp or Scheme. Many (most?) Python
> > programmers also have experience in more typeful languages (typically at
> > least C, since that's how one writes Python extension modules) so
> perhaps
> > that's not as surprising as it might seem.
>
> Ok, but there are worlds between C's type system and
> Haskell's.[1]
>
[Bryn Keller]
Absolutely! C's type system is not nearly so powerful or unobtrusive
as Haskell's.
> > Type inference (to my mind at least) fits the Python mindset very
> > well.
>
> So, how about the following conjecture? Types essentially
> only articulate properties about a program that a good
> programmer would be aware of anyway and would strive to
> reinforce in a well-structured program. Such a programmer
> might not have many problems with a strongly typed language.
[Bryn Keller]
I would agree with this.
> Now, to me, Python has this image of a well designed
> scripting language attracting the kind of programmer who
> strives for elegance and well-structured programs. Maybe
> that is a reason.
[Bryn Keller]
This, too. :-)
[Bryn Keller] [snip]
> Absolutely. In fact, you have just pointed out one of the
> gripes that I have with most Haskell texts and courses. The
> shunning of I/O in textbooks is promoting the image of
> Haskell as a purely academic exercise. Something which is
> not necessary at all, I am teaching an introductory course
> with Haskell myself and did I/O in Week 5 out of 14 (these
> are students without any previous programming experience).
> Moreover, IIRC Paul Hudak's book
> also introduces I/O early.
>
> In other words, I believe that this is a problem with the
> presentation of Haskell and not with Haskell itself.
>
> Cheers,
> Manuel
>
> [1] You might wonder why I am pushing this point. It is
> just because the type system seems to be a hurdle for
> some people who try Haskell. I am curious to understand
> why it is a problem for some and not for others.
[Bryn Keller]
Since my first message and your and Simon Peyton-Jones' response,
I've taken a little more time to work with Haskell, re-read Tackling the
Awkward Squad, and browsed the source for Simon Marlow's web server, and
it's starting to feel more comfortable now. In the paper and in the server
source, there is certainly a fair amount of IO work happening, and it all
looks fairly natural and intuitive.
Mostly I find when I try to write code following those examples (or
so I think!), it turns out to be not so easy, and the real difficulty is
that I can't even put my finger on why it's troublesome. I try many
variations on a theme - some work, some fail, and often I can't see why. I
should have kept all the versions of my program that failed for reasons I
didn't understand, but unfortunately I didn't... The only concrete example
of something that confuses me I can recall is the fact that this compiles:
main = do allLines <- readLines; putStr $ unlines allLines
  where readLines = do
          eof <- isEOF
          if eof then return [] else
            do
              line <- getLine
              allLines <- readLines
              return (line : allLines)
but this doesn't:
main = do putStr $ unlines readLines
  where readLines = do
          eof <- isEOF
          if eof then return [] else
            do
              line <- getLine
              allLines <- readLines
              return (line : allLines)
Evidently this is wrong, but my intuition is that <- simply binds a
name to a value, and that:
foo <- somefunc
bar foo
should be identical to:
bar somefunc
That was one difficulty. Another was trying to figure out what the $
sign was for. Finally I realized it was an alternative to parentheses,
necessary due to the extremely high precedence of function application in
Haskell. That high precedence is also disorienting, by the way. What's the
rationale behind it?
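(In other words, since the Prelude defines f $ x = f x, putStr $ unlines allLines is just putStr (unlines allLines) with the parentheses spelled differently.)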
Struggling along, but starting to enjoy the aesthetics of Haskell,
Bryn
p.s. What data have your students' reactions given you about what is
and is not difficult for beginners to grasp?
From jcab@roningames.com Tue May 15 04:26:21 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Mon, 14 May 2001 20:26:21 -0700
Subject: Things and limitations...
In-Reply-To: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com
>
Message-ID: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Hi. First of all, I'm new to Haskell, so greetings to all listeners.
And I come from the oh-so-ever-present world of C/C++ and such. Thinking in
Haskell is quite different, so if you see me thinking in the wrong way,
please, do point it out.
That said, I want to make it clear that I'm seriously trying to
understand the inner workings of a language like Haskell (I've already
implemented a little toy lazy evaluator, which was great in understanding
how this can all work).
Only recently have I come to really understand (I hope) what a monad is
(and it was extremely elusive for a while). It was a real struggle for a
while :-)
At 01:36 PM 5/14/2001 -0700, Bryn Keller wrote:
>The only concrete example
>of something that confuses me I can recall is the fact that this compiles:
>
> main = do allLines <- readLines; putStr $ unlines allLines
> where readLines = do
> eof <- isEOF
> if eof then return [] else
> do
> line <- getLine
> allLines <- readLines
> return (line : allLines)
>
> but this doesn't:
>
> main = do putStr $ unlines readLines
> where readLines = do
> eof <- isEOF
> if eof then return [] else
> do
> line <- getLine
> allLines <- readLines
> return (line : allLines)
>
> Evidently this is wrong, but my intuition is that <- simply binds a
>name to a value, and that:
>
> foo <- somefunc
> bar foo
>
> should be identical to:
>
> bar somefunc
Yes. I'd even shorten that to:
--- Valid
readLines = do
  eof <- isEOF
  if eof then ...
---
as opposed to:
--- invalid
readLines = do
  if isEOF then ...
---
The reason behind this is, evidently, that the do-notation is just a
little bit of syntactic sugar for monads. It can't "look into" the
parameter to "if" to do the monad transfer. In fact, even if it could look
into the if, it wouldn't work without heavy processing. It would need to
do it EXACTLY in that manner (providing a hidden binding before the
expression that uses the bound value).
And you'd still have lots of problems dealing with order of execution.
Just think of this example:
---
myfunction = do
  if readChar < readChar then ...
---
our hypothetical smarter-do-notation would need to generate one of the
following:
---
myfunction = do
  char1 <- readChar
  char2 <- readChar
  if char1 < char2 then ...
---
or:
---
myfunction = do
  char2 <- readChar
  char1 <- readChar
  if char1 < char2 then ...
---
but which one is correct? In this case, you might want to define rules
saying that the first is 'obviously' the correct one. But with more complex
operations and expressions it might not be possible.
Or you might want to leave it ambiguous. But that is quite against the
spirit of Haskell, I believe.
In any case, forcing the programmer to be more explicit in these
matters is, I believe, a good thing. Same as not allowing circular
references between modules, for example.
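For reference, a tiny self-contained sketch of what the sugar expands to (using getLine rather than the hypothetical readChar above, and made-up names); the do block and its (>>=)/lambda form are the same program:
---
echoSmaller :: IO ()
echoSmaller = do
  line1 <- getLine
  line2 <- getLine
  putStrLn (if line1 < line2 then line1 else line2)
-- desugars, roughly, to:
echoSmaller' :: IO ()
echoSmaller' =
  getLine >>= \line1 ->
  getLine >>= \line2 ->
  putStrLn (if line1 < line2 then line1 else line2)
---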
Anyway... I have been toying a bit with Haskell lately, and I have
several questions:
First, about classes of heavily parametric types. Can't be done, I
believe. At least, I haven't been able to. What I was trying to do (as an
exercise to myself) was reconverting Graham Hutton and Erik Meijer's
monadic parser library into a class. Basically, I was trying to convert the
static:
---
newtype Parser a = P (String -> [(a,String)])
item :: Parser Char
force :: Parser a -> Parser a
first :: Parser a -> Parser a
papply :: Parser a -> String -> [(a,String)]
---
---
class (MonadPlus (p s v)) => Parser p where
  item :: p s v v
  force :: p s v a -> p s v a
  first :: p s v a -> p s v a
  papply :: p s v a -> s -> [(a,s)]
---
I have at home the actual code I tried to make work, so I can't just
copy/paste it, but it looked something like this. Anyway, this class would
allow me to define parsers that parse any kind of thing ('s', which was
'String' in the original lib), from which you can extract any kind of
element ('v', which was 'Char') and parse it into arbitrary types (the
original parameter 'a'). For example, with this you could parse, say, a
recursive algebraic data structure into something else.
Nhc98 wouldn't take it. I assume this is NOT proper Haskell. The
questions are: Is this doable? If so, how? Is this not recommendable? If
not, why?
I had an idea about how to make this much more palatable. It would be
something like:
---
class (MonadPlus p) => Parser p where
  type Source
  type Value
  item :: p Value
  force :: p a -> p a
  first :: p a -> p a
  papply :: p a -> Source -> [(a,Source)]
---
So individual instances of Parser would define the actual type aliases
Source and Value. Again, though, this is NOT valid Haskell.
Questions: Am I being unreasonable here? Why?
Ok, last, I wanted to alias a constructor. So:
---
module MyModule(Type(TypeCons)) where
newtype Type = TypeCons Integer
instance SomeClass Type where
....
---
---
module Main where
import MyModule
newtype NewType = NewTypeCons Type
---
So, now, if I want to construct a NewType, I need to do something like:
---
kk = NewTypeCons (TypeCons 5)
---
And if I want to pattern-match a NewType value, I have to use both
constructors again. It's quite a pain. I've tried to make a constructor
that can do it in one shot, but I've been unable. Tried things like:
---
AnotherCons i = NewTypeCons (TypeCons i)
---
but nothing works. Again, the same questions: Is it doable? Am I being
unreasonable here?
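(A capitalised AnotherCons can't be defined like that, because capitalised names are reserved for constructors; the building half does work with an ordinary lowercase function, though, as in this sketch against the MyModule/Main setup above. Being a function rather than a constructor, it can't be used in a pattern:)
---
anotherCons :: Integer -> NewType
anotherCons i = NewTypeCons (TypeCons i)
-- so, for building, kk could equally be written as: anotherCons 5
---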
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From jcab@roningames.com Tue May 15 04:46:20 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Mon, 14 May 2001 20:46:20 -0700
Subject: Databases
In-Reply-To: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
References: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com >
Message-ID: <4.3.2.7.2.20010514203929.02094d70@207.33.235.243>
Is there an efficient way to make simple databases in Haskell? I mean
something like a dictionary, hash table or associative container of some kind.
I'm aware that Haskell being pure functional means that those things
are not as easily implemented as they can be in other languages, in fact,
I've implemented a simple one myself, using a list of pairs (key,value)
(which means it's slow on lookup) and an optional monad to handle the
updates/lookups.
I guess what I'm wondering is what has been done in this respect. There
is no such thing in the standard library, as far as I can see, and my
search through the web has turned up nothing.
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From JAP97003@uconnvm.uconn.edu Tue May 15 05:12:19 2001
From: JAP97003@uconnvm.uconn.edu (Justin: Member Since 1923)
Date: Tue, 15 May 2001 00:12:19 EDT
Subject: Databases
Message-ID: <20010515040400.C09AB255AB@www.haskell.org>
>
> Is there an efficient way to make simple databases in Haskell? I mean
> something like a dictionary, hash table or associative container of
> some kind.
>
> I'm aware that Haskell being pure functional means that those things
> are not as easily implemented as they can be in other languages, in
> fact, I've implemented a simple one myself, using a list of pairs
> (key,value) (which means it's slow on lookup) and an optional monad to
> handle the updates/lookups.
>
> I guess what I'm wondering is what has been done in this respect. There
> is no such thing in the standard library, as far as I can see, and my
> search through the web has turned up nothing.
Chris Okasaki has developed a whole mess of purely functional data
structures. He has a book:
http://www.cs.columbia.edu/~cdo/papers.html#cup98
Maybe this is what you're looking for?
HTH
-Justin
From Tom.Pledger@peace.com Tue May 15 05:50:02 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Tue, 15 May 2001 16:50:02 +1200
Subject: Things and limitations...
In-Reply-To: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
References: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
<4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Message-ID: <15104.46458.982417.425584@waytogo.peace.co.nz>
Juan Carlos Arevalo Baeza writes:
:
| First, about classes of heavily parametric types. Can't be done, I
| believe. At least, I haven't been able to. What I was trying to do (as an
| exercise to myself) was reconverting Graham Hutton and Erik Meijer's
| monadic parser library into a class. Basically, I was trying to convert the
| static:
|
| ---
| newtype Parser a = P (String -> [(a,String)])
| item :: Parser Char
| force :: Parser a -> Parser a
| first :: Parser a -> Parser a
| papply :: Parser a -> String -> [(a,String)]
| ---
|
| ---
| class (MonadPlus (p s v)) => Parser p where
| item :: p s v v
| force :: p s v a -> p s v a
| first :: p s v a -> p s v a
| papply :: p s v a -> s -> [(a,s)]
| ---
|
| I have at home the actual code I tried to make work, so I can't just
| copy/paste it, but it looked something like this. Anyway, this class would
| allow me to define parsers that parse any kind of thing ('s', which was
| 'String' in the original lib), from which you can extract any kind of
| element ('v', which was 'Char') and parse it into arbitrary types (the
| original parameter 'a'). For example, with this you could parse, say, a
| recursive algebraic data structure into something else.
|
| Nhc98 wouldn't take it. I assume this is NOT proper Haskell. The
| questions are: Is this doable? If so, how? Is this not recommendable? If
| not, why?
I did something similar recently, but took the approach of adding more
parameters to newtype Parser, rather than converting it into a class.
Here's how it begins:
type Indent = Int
type IL a = [(a, Indent)]
newtype Parser a m b = P (Indent -> IL a -> m (b, Indent, IL a))
instance Monad m => Monad (Parser a m) where
    return v    = P (\ind inp -> return (v, ind, inp))
    (P p) >>= f = P (\ind inp -> do
                         (v, ind', inp') <- p ind inp
                         let (P p') = f v
                         p' ind' inp')
    fail s      = P (\ind inp -> fail s)
instance MonadPlus m => MonadPlus (Parser a m) where
    mzero               = P (\ind inp -> mzero)
    (P p) `mplus` (P q) = P (\ind inp -> (p ind inp `mplus` q ind inp))
item :: MonadPlus m => Parser a m a
item = P p
  where
    p ind []  = mzero
    p ind ((x, i):inp)
      | i < ind   = mzero
      | otherwise = return (x, ind, inp)
This differs from Hutton's and Meijer's original in these regards:
- It's generalised over the input token type: the `a' in
`Parser a m b' is not necessarily Char.
- It's generalised over the MonadPlus type in which the result is
given: the `m' in `Parser a m b' is not necessarily [].
- It's specialised for parsing with a layout rule: there's an
indentation level in the state, and each input token is expected
to be accompanied by an indentation level.
You could try something similar for your generalisations:
newtype Parser ct r = P (ct -> [(r, ct)])
-- ct: collection of tokens, r: result
instance SuitableCollection ct => Monad (Parser ct)
  where ...
instance SuitableCollection ct => MonadPlus (Parser ct)
  where ...
item :: Collects ct t => Parser ct t
force :: Parser ct r -> Parser ct r
first :: Parser ct r -> Parser ct r
papply :: Parser ct r -> ct -> [(r, ct)]
The `SuitableCollection' class is pretty hard to define, though.
Either it constrains its members to be list-shaped, or it prevents you
from reusing functions like `item'. Hmmm... I think I've just
stumbled across your reason for treating Parser as a class.
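(For concreteness, the list-shaped reading of that constraint might look something like the sketch below, with a made-up uncons method and a functional dependency; it is precisely this shape that ties the inputs to being list-like:)
class Collects ct t | ct -> t where
  uncons :: ct -> Maybe (t, ct)   -- the next token, if any
instance Collects [a] a where
  uncons []     = Nothing
  uncons (x:xs) = Just (x, xs)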
When the input isn't list-shaped, is the activity still called
parsing? Or is it a generalised fold (of the input type) and unfold
(of the result type)?
Regards,
Tom
From jcab@roningames.com Tue May 15 06:43:51 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Mon, 14 May 2001 22:43:51 -0700
Subject: Databases
In-Reply-To: <20010515040400.C09AB255AB@www.haskell.org>
Message-ID: <4.3.2.7.2.20010514224048.02094d70@207.33.235.243>
At 12:12 AM 5/15/2001 -0400, Justin: Member Since 1923 wrote:
> >something like a dictionary, hash table or associative container of
>some kind.
>
>Chris Okasaki has developed a whole mess of purely functional data
>structures. He has a book:
>http://www.cs.columbia.edu/~cdo/papers.html#cup98
>
>Maybe this is what you're looking for?
I believe so! Thanx!
Geee... A red-black tree set in 30 lines of code... I hope it works :)
I've never done one of those myself, but it's said that they are as tricky
as they are efficient... What I'd use in C++ (STL map) is something pretty
close, and it's usually implemented as an R/B tree, so I think it'll work.
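(For a sense of the basic shape, the simplest purely functional dictionary of that kind is a plain, unbalanced binary search tree; Okasaki's red-black version adds the rebalancing on top of roughly this:)
data Tree k v = Leaf | Node (Tree k v) k v (Tree k v)
insertT :: Ord k => k -> v -> Tree k v -> Tree k v
insertT k v Leaf = Node Leaf k v Leaf
insertT k v (Node l k' v' r)
  | k < k'    = Node (insertT k v l) k' v' r
  | k > k'    = Node l k' v' (insertT k v r)
  | otherwise = Node l k v r           -- key already present: replace the value
lookupT :: Ord k => k -> Tree k v -> Maybe v
lookupT _ Leaf = Nothing
lookupT k (Node l k' v' r)
  | k < k'    = lookupT k l
  | k > k'    = lookupT k r
  | otherwise = Just v'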
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From jcab@roningames.com Tue May 15 06:46:24 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Mon, 14 May 2001 22:46:24 -0700
Subject: Things and limitations...
In-Reply-To: <15104.46458.982417.425584@waytogo.peace.co.nz>
References: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
<61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
<4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Message-ID: <4.3.2.7.2.20010514224429.02094d70@207.33.235.243>
At 04:50 PM 5/15/2001 +1200, Tom Pledger wrote:
>I did something similar recently, but took the approach of adding more
>parameters to newtype Parser, rather than converting it into a class.
Yes, that's how I started.
>You could try something similar for your generalisations:
>
> newtype Parser ct r = P (ct -> [(r, ct)])
> -- ct: collection of tokens, r: result
This is EXACTLY how I started :)
> instance SuitableCollection ct => Monad (Parser ct)
> where ...
>
> instance SuitableCollection ct => MonadPlus (Parser ct)
> where ...
>
> item :: Collects ct t => Parser ct t
> force :: Parser ct r -> Parser ct r
> first :: Parser ct r -> Parser ct r
> papply :: Parser ct r -> ct -> [(r, ct)]
>
>The `SuitableCollection' class is pretty hard to define, though.
>Either it constrains its members to be list-shaped, or it prevents you
>from reusing functions like `item'. Hmmm... I think I've just
>stumbled across your reason for treating Parser as a class.
:) Maybe. Thanx for your help, though.
>When the input isn't list-shaped, is the activity still called
>parsing? Or is it a generalised fold (of the input type) and unfold
>(of the result type)?
Actually, I guess you can call it destructive pattern-matching.
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From chak@cse.unsw.edu.au Tue May 15 06:46:23 2001
From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty)
Date: Tue, 15 May 2001 15:46:23 +1000
Subject: Functional programming in Python
In-Reply-To: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
References: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
Message-ID: <20010515154623Y.chak@cse.unsw.edu.au>
brk@jenkon.com wrote,
> > From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> > Absolutely. In fact, you have just pointed out one of the
> > gripes that I have with most Haskell texts and courses. The
> > shunning of I/O in textbooks is promoting the image of
> > Haskell as a purely academic exercise. Something which is
> > not necessary at all, I am teaching an introductory course
> > with Haskell myself and did I/O in Week 5 out of 14 (these
> > are students without any previous programming experience).
> > Moreover, IIRC Paul Hudak's book
> > also introduces I/O early.
> >
> > In other words, I believe that this is a problem with the
> > presentation of Haskell and not with Haskell itself.
>
> Since my first message and your and Simon Peyton-Jones' response,
> I've taken a little more time to work with Haskell, re-read Tackling the
> Awkward Squad, and browsed the source for Simon Marlow's web server, and
> it's starting to feel more comfortable now. In the paper and in the server
> source, there is certainly a fair amount of IO work happening, and it all
> looks fairly natural and intuitive.
>
> Mostly I find when I try to write code following those examples (or
> so I think!), it turns out to be not so easy, and the real difficulty is
> that I can't even put my finger on why it's troublesome. I try many
> variations on a theme - some work, some fail, and often I can't see why. I
> should have kept all the versions of my program that failed for reasons I
> didn't understand, but unfortunately I didn't... The only concrete example
> of something that confuses me I can recall is the fact that this compiles:
>
> main = do allLines <- readLines; putStr $ unlines allLines
> where readLines = do
> eof <- isEOF
> if eof then return [] else
> do
> line <- getLine
> allLines <- readLines
> return (line : allLines)
>
> but this doesn't:
>
> main = do putStr $ unlines readLines
> where readLines = do
> eof <- isEOF
> if eof then return [] else
> do
> line <- getLine
> allLines <- readLines
> return (line : allLines)
>
> Evidently this is wrong, but my intuition is that <- simply binds a
> name to a value, and that:
No, that is not the case. It does more, it executes an I/O action.
> foo <- somefunc
> bar foo
>
> should be identical to:
>
> bar somefunc
But it isn't; however, we have
do
  let foo = somefunc
  bar foo
is identical to
do
  bar somefunc
So, this all boils down to the question, what is the
difference between
do
  let foo = somefunc -- Version 1
  bar foo
and
do
  foo <- somefunc    -- Version 2
  bar foo
The short answer is that Version 2 (the arrow) executes any
side effects encoded in `somefunc', whereas Version 1 (the
let binding) doesn't do that. Expressions given as an
argument to a function behave as if they were let bound, ie,
they don't execute any side effects. This explains why the
identity that you stated above does not hold.
So, at the core is that Haskell insists on distinguishing
expressions that can have side effects from those that
cannot. This distinction makes the language a little bit
more complicated (eg, by forcing us to distinguish between
`=' and `<-'), but it also has the benefit that both a
programmer and the compiler can immediately tell which
expressions do have side effects and which don't. For
example, this often makes it a lot easier to alter code
written by somebody else. It also makes it easier to
formally reason about code and it gives the compiler scope
for rather radical optimisations.
To reinforce the distinction, consider the following two
pieces of code (where `readLines' is the routine you defined
above):
do
  let x = readLines
  y <- x
  z <- x
  return (y ++ z)
and
do
  x <- readLines
  let y = x
  let z = x
  return (y ++ z)
How is the result (and I/O behaviour) different?
> That was one difficulty. Another was trying to figure out what the $
> sign was for. Finally I realized it was an alternative to parentheses,
> necessary due to the extremely high precedence of function application in
> Haskell. That high precedence is also disorienting, by the way. What's the
> rationale behind it?
You want to be able to write
f 1 2 + g 3 4
instead of
(f 1 2) + (g 3 4)
> p.s. What data have your students' reactions given you about what is
> and is not difficult for beginners to grasp?
They found it to be a difficult topic, but they found
"Unix/Shell scripts" even harder (and we did only simple
shell scripts). I actually made another interesting
observation (and keep in mind that for many that was their
first contact with programming). I had prepared for the
distinction between side effecting and non-side-effecting
expressions to be a hurdle in understanding I/O. What I
hadn't taken into account was the fact that they had
only worked in an interactive interpreter environment (as
opposed to, possibly compiled, standalone code) would pose
them a problem. The interactive interpreter had allowed
them to type in input and get results printed all along,
so they didn't see why it should be necessary to complicate
a program with print statements.
I append the full breakdown of the student answers.
Cheers,
Manuel
-=-
                             Very difficult                Average                 Very easy
Recursive functions               3.8%        16.1%        44.2%        25.2%        12.1%
List processing                   5.2%        18%          44%          25.4%         8.8%
Pattern matching                  3%          15.2%        41.4%        27.8%        14%
Association lists                 4.5%        28.5%        48.5%        15.4%         4.5%
Polymorphism/overloading         10.9%        44.2%        37.8%         5.9%         2.6%
Sorting                           5.7%        33.5%        47.6%        11.6%         3%
Higher-order functions           16.9%        43%          31.4%         8.5%         1.6%
Input/output                     32.6%        39.7%        19.7%         7.3%         2.1%
Modules/decomposition            12.8%        37.1%        35.9%        12.1%         3.5%
Trees                            29.5%        41.9%        21.9%         5.7%         2.6%
ADTs                             35.9%        36.4%        20.9%         6.1%         2.1%
Unix/shell scripts               38.5%        34.7%        20.7%         5.7%         1.9%
Formal reasoning                 11.1%        22.6%        31.9%        20.9%        15%
From brk@jenkon.com Tue May 15 18:16:14 2001
From: brk@jenkon.com (brk@jenkon.com)
Date: Tue, 15 May 2001 10:16:14 -0700
Subject: Functional programming in Python
Message-ID: <61AC3AD3E884D411836F0050BA8FE9F3355218@franklin.jenkon.com>
> -----Original Message-----
> From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> Sent: Monday, May 14, 2001 10:46 PM
> To: brk@jenkon.com
> Cc: haskell-cafe@haskell.org
> Subject: RE: Functional programming in Python
>
> brk@jenkon.com wrote,
>
> > > From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> > Evidently this is wrong, but my intuition is that <- simply binds a
> > name to a value, and that:
>
> No, that is not the case. It does more, it executes an I/O action.
>
[Bryn Keller] [snip]
> The short answer is that Version 2 (the arrow) executes any
> side effects encoded in `somefunc', whereas Version 1 (the
> let binding) doesn't do that. Expressions given as an
> argument to a function behave as if they were let bound, ie,
> they don't execute any side effects. This explains why the
> identity that you stated above does not hold.
>
> So, at the core is that Haskell insists on distinguishing
> expressions that can have side effects from those that
> cannot. This distinction makes the language a little bit
> more complicated (eg, by forcing us to distinguish between
> `=' and `<-'), but it also has the benefit that both a
> programmer and the compiler can immediately tell which
> expressions do have side effects and which don't. For
> example, this often makes it a lot easier to alter code
> written by somebody else. It also makes it easier to
> formally reason about code and it gives the compiler scope
> for rather radical optimisations.
>
[Bryn Keller]
Exactly the clarification I needed, thank you!
[Bryn Keller] [snip]
> > p.s. What data have your students' reactions given you about what is
> > and is not difficult for beginners to grasp?
>
> They found it to be a difficult topic, but they found
> "Unix/Shell scripts" even harder (and we did only simple
> shell scripts). I actually made another interesting
> observation (and keep in mind that for many that was their
> first contact with programming). I had prepared for the
> distinction between side effecting and non-side-effecting
> expressions to be a hurdle in understanding I/O. What I
> hadn't taken into account was the fact that they had
> only worked in an interactive interpreter environment (as
> opposed to, possibly compiled, standalone code) would pose
> them a problem. The interactive interpreter had allowed
> them to type in input and get results printed all along,
> so they didn't see why it should be necessary to complicate
> a program with print statements.
[Bryn Keller]
Interesting!
Thanks for your help, and for sharing your students' observations. I
always knew shell scripting was harder than it ought to be. ;-)
Bryn
From mechvel@math.botik.ru Wed May 16 11:42:13 2001
From: mechvel@math.botik.ru (S.D.Mechveliani)
Date: Wed, 16 May 2001 14:42:13 +0400
Subject: algebra proposals
Message-ID:
It appears there was a discussion of the library proposals,
by D. Thurston and maybe others.
And people asked for my comments. My comments are as follows.
-----------------------------------------------------------------
(1) It is misleading to call this part of the standard library
`numeric':
`numeric classes', `numeric Prelude', `Num', and so on.
Matrices and polynomials match many of the corresponding instances
but can hardly be called `numeric' objects.
This is why BAL introduces the term Basic Algebra (items).
(2) The order of considering problems matters.
First we have to decide how to program algebra and then - whether
(and how) to change the standard. The idea of the first stage
forms the skeleton for the second.
(3) As I see, we cannot decide how to program algebra in Haskell.
Hence, it is better not to touch the standard - in order to save
effort and e-mail noise.
-----------------------------------------------------------------
Comments to (2)
---------------
Anyone intending to improve the standard has either to write a
sensible algebraic Haskell application or to study an existing one
attentively (computing various examples with it).
A `sensible application' should show how to operate in parametric
domains, not only in domains like (Integer, Bool).
Example: polynomials in [x1..xn] with coefficients over a field
(like rational numbers) quotiented by several equations.
For example, the data
r = Residue (x+y) (Ideal [x^2+3, y^3-2])
represents the mathematical expression
squareRoot(-3) + cubicRoot(2)
The basis bas = [x^2+3, y^3-2] for I = Ideal [x^2+3, y^3-2]
can be computed as first-class data.
And for many instances
(maybe Fractional, Field, ...)
the domain (type?) of r matches or does not match
depending on the value of bas.
Even if the application builds the domains only statically,
when the compiler knows statically all the intended values of bas,
Haskell would still meet difficulties.
On dependent types
------------------
Maybe, Haskell has to move to dependent types.
I am not sure, but probably, unsolvability does not matter.
If the compiler fails to prove the correctness condition (on p) for
a type T(p) within the given threshold d, the compiler reports
"cannot prove ...". The user has to add annotations that help to
prove this. Or to say "solve at run-time". Or to say "I postulate
this condition on p, put the correctness on me".
With kind regards,
-----------------
Serge Mechveliani
mechvel@botik.ru *** I am not in haskell-cafe list ***
From jcab@roningames.com Thu May 17 03:20:00 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Wed, 16 May 2001 19:20:00 -0700
Subject: Things and limitations...
In-Reply-To: <15104.46458.982417.425584@waytogo.peace.co.nz>
References: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
<61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
<4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Message-ID: <4.3.2.7.2.20010516190556.03cd57f0@207.33.235.243>
At 04:50 PM 5/15/2001 +1200, Tom Pledger wrote:
> | ---
> | class (MonadPlus (p s v)) => Parser p where
> | item :: p s v v
> | force :: p s v a -> p s v a
> | first :: p s v a -> p s v a
> | papply :: p s v a -> s -> [(a,s)]
> | ---
>
>[...]
>
>The `SuitableCollection' class is pretty hard to define, though.
>Either it constrains its members to be list-shaped, or it prevents you
>from reusing functions like `item'. Hmmm... I think I've just
>stumbled across your reason for treating Parser as a class.
>
>When the input isn't list-shaped, is the activity still called
>parsing? Or is it a generalised fold (of the input type) and unfold
>(of the result type)?
Well, it looks like Justin's answer to my "Databases" thread gave me
the clue. What I want is called "Multiple parameter classes". Okasaki's
code for implementing sets uses this extension to make a Set class. It
wouldn't compile with nhc98 either, so I tried Hugs, which does support it
if extensions are enabled, and has, in its documentation, a very nice
explanation of the tradeoffs that using this extension entails.
So, what I really want is something like:
class (MonadPlus (p s v)) => Parser p s v | s -> v where
  item :: p s v v
  force :: p s v a -> p s v a
  first :: p s v a -> p s v a
  papply :: p s v a -> s -> [(a,s)]
I haven't had the time to play with this yet, but it sounds promising...
In case anyone is interested in the Hugs documentation of the feature:
http://www.cse.ogi.edu/PacSoft/projects/Hugs/pages/hugsman/exts.html#exts
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From qrczak@knm.org.pl Thu May 17 08:31:23 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 17 May 2001 07:31:23 GMT
Subject: Things and limitations...
References: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Message-ID:
Mon, 14 May 2001 20:26:21 -0700, Juan Carlos Arevalo Baeza writes:
> class (MonadPlus (p s v)) => Parser p where
> item :: p s v v
> force :: p s v a -> p s v a
> first :: p s v a -> p s v a
> papply :: p s v a -> s -> [(a,s)]
This MonadPlus superclass can't be written. The Parser class is
overloaded only on p and must work uniformly on s and v, which can
be expressed for functions (by using s and v as here: type variables
not mentioned elsewhere), but can't for superclasses. What you want
here is this:
class (forall s v. MonadPlus (p s v)) => Parser p where
which is not supported by any Haskell implementation (but I hope it
will be: it's not the first case where it would be useful).
This should work on implementations supporting multiparameter type
classes (ghc and Hugs):
class (MonadPlus (p s v)) => Parser p s v where
  item :: p s v v
  force :: p s v a -> p s v a
  first :: p s v a -> p s v a
  papply :: p s v a -> s -> [(a,s)]
Well, having (p s v) in an argument of a superclass context is not
standard either :-( Haskell98 requires types here to be type variables.
It requires that each parser type is parametrized by s and v; a
concrete parser type with hardwired String can't be made an instance of
this class, unless wrapped in a type which provides these parameters.
The best IMHO solution uses yet another extension: functional dependencies.
class (MonadPlus p) => Parser p s v | p -> s v where
  item :: p v
  force :: p a -> p a
  first :: p a -> p a
  papply :: p a -> s -> [(a,s)]
Having a fundep allows us to have methods which don't have v and s in
their types. The fundep states that a single parser parses only
one type of input and only one type of tokens, so the type will
be implicitly deduced from the type of the parser itself, based on the
available instances.
Well, I think that s will always be [v], so it can be simplified thus:
class (MonadPlus p) => Parser p v | p -> v where
  item :: p v
  force :: p a -> p a
  first :: p a -> p a
  papply :: p a -> [v] -> [(a,[v])]
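As a sketch of how an instance then looks (untested, using the Haskell 98 Monad/MonadPlus classes plus the extensions above, and with a made-up newtype P standing in for the concrete Hutton/Meijer parser):
import Monad (MonadPlus(..))
newtype P a = P (String -> [(a, String)])
instance Monad P where
  return v  = P (\s -> [(v, s)])
  P p >>= f = P (\s -> concat [ q s' | (v, s') <- p s, let P q = f v ])
instance MonadPlus P where
  mzero           = P (\_ -> [])
  P p `mplus` P q = P (\s -> p s ++ q s)
instance Parser P Char where
  item = P (\s -> case s of
                    []     -> []
                    (c:cs) -> [(c, cs)])
  force p      = p                      -- laziness details elided
  first (P p)  = P (\s -> take 1 (p s))
  papply (P p) = p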
Without fundeps it could be split into classes depending on which
methods require v:
class (MonadPlus p) => BasicParser p where
  force :: p a -> p a
  first :: p a -> p a
class (BasicParser p) => Parser p v where
  item :: p v
  papply :: p a -> [v] -> [(a,[v])]
This differs from the fundep solution in that sometimes an explicit type
constraint must be used to disambiguate the type of v, because the
declaration states that the same parser could parse different types
of values. Well, perhaps this is what you want, and
item :: SomeConcreteParser Char
could give one character where
item :: SomeConcreteParser (Char,Char)
gives two? In any case using item in a way which doesn't tell which
item type to use is an error.
> Ok, last, I wanted to alias a constructor. So:
There is no such thing. A constructor can't be renamed. You would
have to wrap the type inside the constructor in a new constructor.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From karczma@info.unicaen.fr Thu May 17 09:40:07 2001
From: karczma@info.unicaen.fr (Jerzy Karczmarczuk)
Date: Thu, 17 May 2001 10:40:07 +0200
Subject: BAL paper available >> graphic libraries
References:
<3B015A65.C87801F5@info.unicaen.fr>
<20010515211402.B24804@math.harvard.edu>
<3B03748C.D65FDD19@info.unicaen.fr> <15107.31057.395890.996366@tcc2>
Message-ID: <3B038E67.CBC05F55@info.unicaen.fr>
[[Perhaps, if this thread continues, it is time to move to a nice -café
at the corner.]]
Timothy Docker:
> Has anyone considered writing a haskell wrapper for SDL - Simple
> Directmedia Layer at http://www.libsdl.org ?
>
> This is a cross platform library intended for writing games, and aims
> for a high performance, low level API. It would be interesting to see
> how clean a functional API could be built around such an imperative
> framework.
Ammmmm...... I didn't want to touch this issue, but, well, indeed, I had
a look at Sam Lantinga's SDL package some time ago (I believe, a new
version exists now). I know that Johnny Andersen (somewhere near DIKU,
plenty of craziness on his ÁNOQ pages...) produced Standard ML bindings
for that. But reading this stuff I was simply scared to death!
Using simultaneously 6 compilers, passing through "C" code, etc. - my
impression was: OK, it should work. If somebody wants to write a game,
a concrete simulation in ML, it might help him.
But, on the other hand if you want to have a decent programming platform,
enabling you to write - even (or: especially) for pedagogical purposes
some graphic tools of universal character, say,
* a model ray tracer with the "native" scene description language (i.e.
the same language for the core implementation, and for the scene/object
description, and for its scripting (animation)),
* a generic texture generator and image processing toolbox with all such
stuff as relational algebra of images, math. morphology, etc.,
* a radiosity machine
* ... dozen of other projects ...
then without a common memory management, without rebuilding the - say -
Haskell runtime *with* SDL or other low-level graphics utilities, it
might be difficult to use.
The approach taken by Clean'ers, to have an "almost intrinsic" graphic
object-oriented IO layer, and building their Game Library upon it, seems
more reasonable, and at any rate it has more sex-appeal for those who
are truly interested in practical functional programming as such, and
not just in stacking some external goodies which integrate badly with the
language.
Bother, I realized that my main contributions to this list are just
complaining. Somebody could teleport me to a desert island, with
some computers, but without hordes of students, just for 6 months?
Jerzy Karczmarczuk
Caen, France
From fidel@cs.chalmers.se Thu May 17 11:08:01 2001
From: fidel@cs.chalmers.se (Pablo E. Martinez Lopez)
Date: Thu, 17 May 2001 12:08:01 +0200
Subject: Things and limitations...
References: <4.3.2.7.2.20010514142647.01b9c798@207.33.235.243>
Message-ID: <3B03A301.6428DDAF@cs.chalmers.se>
I have made something very similar, and it worked.
That work is reported in the paper "Generic Parser Combinators"
published in the 2nd Latin-American Conference on Functional Programming
(CLaPF). You can download it from
ftp://sol.info.unlp.edu.ar/pub/papers/theory/fp/2ndCLaPF/Papers/mlopez2.ps.gz
There I have made Hutton's parsers, Fokker's parsers and Rojemo's
parsers instances of a class Parser that looks similar to what you have
attempted. But it uses multiparameter type classes.
Sadly, Swierstra's parsers cannot be made instances of this class,
because they are not monads.
I hope this will help you.
FF
Juan Carlos Arevalo Baeza wrote:
> First, about classes of heavily parametric types. Can't be done, I
> believe. At least, I haven't been able to. What I was trying to do (as an
> exercise to myself) was reconverting Graham Hutton and Erik Meijer's
> monadic parser library into a class. Basically, I was trying to convert the
> static:
>
> ---
> newtype Parser a = P (String -> [(a,String)])
> item :: Parser Char
> force :: Parser a -> Parser a
> first :: Parser a -> Parser a
> papply :: Parser a -> String -> [(a,String)]
> ---
>
> ---
> class (MonadPlus (p s v)) => Parser p where
> item :: p s v v
> force :: p s v a -> p s v a
> first :: p s v a -> p s v a
> papply :: p s v a -> s -> [(a,s)]
> ---
>
> I have at home the actual code I tried to make work, so I can't just
> copy/paste it, but it looked something like this. Anyway, this class would
> allow me to define parsers that parse any kind of thing ('s', which was
> 'String' in the original lib), from which you can extract any kind of
> element ('v', which was 'Char') and parse it into arbitrary types (the
> original parameter 'a'). For example, with this you could parse, say, a
> recursive algebraic data structure into something else.
>
> Nhc98 wouldn't take it. I assume this is NOT proper Haskell. The
> questions are: Is this doable? If so, how? Is this not recommendable? If
> not, why?
From kort@science.uva.nl Thu May 17 14:01:49 2001
From: kort@science.uva.nl (Jan Kort)
Date: Thu, 17 May 2001 15:01:49 +0200
Subject: BAL paper available >> graphic libraries
References:
<3B015A65.C87801F5@info.unicaen.fr>
<20010515211402.B24804@math.harvard.edu>
<3B03748C.D65FDD19@info.unicaen.fr> <15107.31057.395890.996366@tcc2> <3B038E67.CBC05F55@info.unicaen.fr>
Message-ID: <3B03CBBD.29BC7DC@wins.uva.nl>
Jerzy Karczmarczuk wrote:
>
> [[Perhaps, if this thread continues, it is time to move to a nice -café
> at the corner.]]
>
> Timothy Docker:
>
> > Has anyone considered writing a haskell wrapper for SDL - Simple
> > Directmedia Layer at http://www.libsdl.org ?
> >
> > This is a cross platform library intended for writing games, and aims
> > for a high performance, low level API. It would be interesting to see
> > how clean a functional API could be built around such an imperative
> > framework.
Yes, SDL is a very interesting library, I wrote a wrapper for
part of it a while ago:
http://www.science.uva.nl/~kort/sdlhs/
It's far from complete and I have no plans to work on it. But for
trying out graphics stuff it's very useful, mainly because you
can understand the whole thing, fix it if it breaks down: make
some change and recompile the whole wrapper in a minute or so.
>
> Ammmmm...... I didn't want to touch this issue, but, well, indeed, I had
> a look on Sam Lantinga's SDL package some time ago (I believe, a new
> version exists now). I know that Johnny Andersen (somewhere near DIKU,
> plenty of craziness on his ÁNOQ pages...) produced a Standard ML bindings
> for that. But reading this stuff I was simply scared to death!
> Using simultaneously 6 compilers, passing through "C" code, etc. - my
> impression was: OK, it should work. If somebody wants to write a game,
> a concrete simulation in ML, it might help him.
I tried to install that stuff a while ago, I spent 2
days cursing, and after a friendly but unhelpful email
exchange I gave up... I have no clue how to get that
stuff working. Sounded really cool though: an ML
implementation that is on average 3 times as fast as
SML/NJ, a lightweight C interface, support for
crosscompiling ML+SDL applications from Linux to Windows.
Maybe I should try one more time..
Jan
From korek@icm.edu.pl Sat May 19 18:06:42 2001
From: korek@icm.edu.pl (korek@icm.edu.pl)
Date: Sat, 19 May 2001 19:06:42 +0200 (CEST)
Subject: Is there version of Report merged with errata?
Message-ID:
It seems that the errata for the Haskell Report are an independent document
on www.haskell.org. My limited human perception forces me to ignore these
corrections completely. Has anybody got an updated (merged) version?
Greetings :-) (and thanks in advance)
Michal Gajda
korek@icm.edu.pl
From Sven.Panne@informatik.uni-muenchen.de Sat May 19 18:40:38 2001
From: Sven.Panne@informatik.uni-muenchen.de (Sven Panne)
Date: Sat, 19 May 2001 19:40:38 +0200
Subject: Is there version of Report merged with errata?
References:
Message-ID: <3B06B016.A5B09378@informatik.uni-muenchen.de>
korek@icm.edu.pl wrote:
> [...] Has anybody got an updated(merged) version?
http://research.microsoft.com/~simonpj/haskell98-revised/
Cheers,
Sven
From ketil@ii.uib.no Sun May 20 21:03:59 2001
From: ketil@ii.uib.no (Ketil Malde)
Date: 20 May 2001 22:03:59 +0200
Subject: Functional programming in Python
In-Reply-To: "Manuel M. T. Chakravarty"'s message of "Tue, 15 May 2001 15:46:23 +1000"
References: <61AC3AD3E884D411836F0050BA8FE9F3355212@franklin.jenkon.com>
<20010515154623Y.chak@cse.unsw.edu.au>
Message-ID:
"Manuel M. T. Chakravarty" writes:
> You want to be able to write
> f 1 2 + g 3 4
> instead of
> (f 1 2) + (g 3 4)
I do? Personally, I find it a bit confusing, and I still often get it
wrong on the first attempt. The good thing is that the rule is simple
to remember. :-)
-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
From mpj@cse.ogi.edu Mon May 21 05:43:06 2001
From: mpj@cse.ogi.edu (Mark P Jones)
Date: Sun, 20 May 2001 21:43:06 -0700
Subject: HUGS error: Unresolved overloading
In-Reply-To: <000f01c0dec1$a01ca4a0$0100a8c0@CO3003288A>
Message-ID:
Hi David,
| Can anyone shed some light on the following error? Thanks in advance.
|
| isSorted :: Ord a => [a] -> Bool
| isSorted [] = True
| isSorted [x] = True
| isSorted (x1:x2:xs)
|   | x1 <= x2  = isSorted (x2:xs)
|   | otherwise = False
I'm branching away from your question, but hope that you might find some
additional comments useful ... the last equation in your definition can
actually be expressed more succinctly as:
isSorted (x1:x2:xs) = (x1 <= x2) && isSorted (x2:xs)
This means exactly the same thing, but, in my opinion at least, is much
clearer. In fact there's a really nice way to redo the whole definition
in one line using standard prelude functions and no explicit recursion:
isSorted xs = and (zipWith (<=) xs (tail xs))
In other words: "When is a list xs sorted? If each element in xs is
less than or equal to its successor in the list (i.e., the corresponding
element in tail xs)."
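For example, on a small input the definition unfolds like this:
    isSorted [1,3,2]
      = and (zipWith (<=) [1,3,2] [3,2])
      = and [1 <= 3, 3 <= 2]
      = and [True, False]
      = False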
I know this definition may look obscure and overly terse if you're new
to either (a) the prelude functions used here or (b) the whole style of
programming. But once you get used to it, the shorter definition will
actually seem much clearer than the original, focusing on the important
parts and avoiding unnecessary clutter.
I don't see many emails on this list about "programming style", so this
is something of an experiment. If folks on the list find it interesting
and useful, perhaps we'll see more. But if everybody else thinks this
kind of thing is a waste of space, then I guess this may be the last
such posting!
All the best,
Mark
From laszlo@ropas.kaist.ac.kr Mon May 21 07:34:35 2001
From: laszlo@ropas.kaist.ac.kr (Laszlo Nemeth)
Date: Mon, 21 May 2001 15:34:35 +0900 (KST)
Subject: HUGS error: Unresolved overloading
In-Reply-To:
References:
Message-ID: <200105210634.PAA14039@ropas.kaist.ac.kr>
Hi Mark,
> isSorted xs = and (zipWith (<=) xs (tail xs))
> In other words: "When is a list xs sorted? If each element in xs is
> less than or equal to its successor in the list (i.e., the corresponding
> element in tail xs)."
That's right ... under cbn! At the same time David's version with
explicit recursion is fine both in Hugs and in a strict language.
I recently started using Caml for day to day work and I get bitten
because of the 'lazy mindset' at least once a week. I am not in
disagreement with you over the style, but explicit recursion in this
case avoids the problem.
Cheers,
Laszlo
PS. Why not go all the way
and . uncurry (zipWith (<=)) . id >< tail . dup
with appropriate definitions for dup and >< (prod)?
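(One possible reading, with the helpers spelled out; note the id >< tail part needs parentheses, or a suitably high fixity for ><:)
dup :: a -> (a, a)
dup x = (x, x)
(><) :: (a -> c) -> (b -> d) -> (a, b) -> (c, d)
(f >< g) (x, y) = (f x, g y)
isSorted :: Ord a => [a] -> Bool
isSorted = and . uncurry (zipWith (<=)) . (id >< tail) . dup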
From Malcolm.Wallace@cs.york.ac.uk Mon May 21 13:55:27 2001
From: Malcolm.Wallace@cs.york.ac.uk (Malcolm Wallace)
Date: Mon, 21 May 2001 13:55:27 +0100
Subject: PP with HaXml 1.02
In-Reply-To: <9818339E731AD311AE5C00902715779C0371A76D@szrh00313.tszrh.csfb.com>
Message-ID:
> I'm proceeding with basic exploratory tests with HaXml (1.02, winhugs feb 2001)
>
> the pretty printer returns the closing < on the next line, like
>
> > >
> where i expect
>
>
>
>
>
> what am I doing wrong?
You are not doing anything wrong. This is the intended behaviour.
It is very important that HaXml does not add whitespace that could be
interpreted as #PCDATA within an element, given that in many cases we
do not know whether the DTD might permit free text in this position.
Whitespace inside the tag is always ignored, so it is a safe way to
display indentation without changing the meaning of the document.
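(So output of the general shape
    <name
      >some text</name
      >
leaves the character data as exactly "some text", whereas putting the line
break after the > would add whitespace to the element's content.)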
Regards,
Malcolm
From pk@cs.tut.fi Tue May 22 09:32:31 2001
From: pk@cs.tut.fi (Pertti Kellomäki)
Date: Tue, 22 May 2001 11:32:31 +0300
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org>
Message-ID: <3B0A241F.2F7E564A@cs.tut.fi>
> From: Ketil Malde
> "Manuel M. T. Chakravarty" writes:
> > You want to be able to write
>
> > f 1 2 + g 3 4
>
> > instead of
>
> > (f 1 2) + (g 3 4)
>
> I do? Personally, I find it a bit confusing, and I still often get it
> wrong on the first attempt.
Same here. A while back someone said something along the lines that people
come to Haskell because of the syntax. For me it is the other way around.
My background is in Scheme/Lisp, and I still find it irritating that I cannot
just say indent-sexp and the like in Emacs. It is the other properties of the
language that keep me using it. I also get irritated when I get
precedence wrong, so in fact I tend to write (f 1 2) + (g 2 3), which to
my eye conveys the intended structure much better and compiles at first try.
--
pertti
From olaf@cs.york.ac.uk Tue May 22 12:40:31 2001
From: olaf@cs.york.ac.uk (Olaf Chitil)
Date: Tue, 22 May 2001 12:40:31 +0100
Subject: 'lazy mindset' was: HUGS error: Unresolved overloading
References: <200105210634.PAA14039@ropas.kaist.ac.kr>
Message-ID: <3B0A502F.EF7771DA@cs.york.ac.uk>
Laszlo Nemeth wrote:
> > isSorted xs = and (zipWith (<=) xs (tail xs))
>
> > In other words: "When is a list xs sorted? If each element in xs is
> > less than or equal to its successor in the list (i.e., the corresponding
> > element in tail xs)."
>
> That's right ... under cbn! At the same time David's version with
> explicit recursion is fine both in Hugs and in a strict language.
>
> I recently started using Caml for day to day work and I get bitten
> because of the 'lazy mindset' at least once a week. I am not in
> disagreement with you over the style, but explicit recursion in this
> case avoids the problem.
I find this remark very interesting. It reminds me of the many people
who say that they want a call-by-value language that allows call-by-need
annotations, because you rarely need call-by-need. That depends very
much on what you mean by `need call-by-need'! Mark has given a nice
example of how a function can be defined concisely, taking advantage of the
call-by-need language. I very much disagree with Laszlo's comment that
you should use explicit recursion to make the definition valid for
call-by-value languages as well. You should take full advantage of the
expressive power of your call-by-need language. Otherwise, why not avoid
higher-order functions because they are not available in other
languages?
As Laszlo says, you need a 'lazy mindset' to use this style. It takes
time to develop this 'lazy mindset' (just as it takes time to lose it
again ;-). To help people understand and learn this 'lazy
mindset', I'd really like to see more examples such as Mark's. If there
are more examples, I could collect them on haskell.org.
> PS. Why not go all the way
>
> and . uncurry (zipWith (<=)) . id >< tail . dup
>
> with appropriate definitions for dup and >< (prod)?
It is longer, uses more functions and doesn't make the algorithm any
clearer. IMHO point-free programming and categorical combinators are not
the way to obtain readable programs.
Mark uses a few standard functions. His definition is more readable than
the recursive one, because it is shorter and because it turns implicit
control into explicit data structures.
Cheers,
Olaf
--
OLAF CHITIL,
Dept. of Computer Science, University of York, York YO10 5DD, UK.
URL: http://www.cs.york.ac.uk/~olaf/
Tel: +44 1904 434756; Fax: +44 1904 432767
From chak@cse.unsw.edu.au Tue May 22 14:54:48 2001
From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty)
Date: Tue, 22 May 2001 23:54:48 +1000
Subject: Functional programming in Python
In-Reply-To: <3B0A241F.2F7E564A@cs.tut.fi>
References: <20010521160104.A5990255C2@www.haskell.org>
<3B0A241F.2F7E564A@cs.tut.fi>
Message-ID: <20010522235448A.chak@cse.unsw.edu.au>
Pertti Kellomäki wrote,
> > From: Ketil Malde
> > "Manuel M. T. Chakravarty" writes:
> > > You want to be able to write
> >
> > > f 1 2 + g 3 4
> >
> > > instead of
> >
> > > (f 1 2) + (g 3 4)
> >
> > I do? Personally, I find it a bit confusing, and I still often get it
> > wrong on the first attempt.
>
> Same here. A while back someone said something along the lines that people
> come to Haskell because of the syntax. For me it is the other way around.
> My background is in Scheme/Lisp, and I still find it irritating that I cannot
> just say indent-sexp and the like in Emacs. It is the other properties of the
> language that keep me using it. I also get irritated when I get
> precedence wrong, so in fact I tend to write (f 1 2) + (g 2 3), which to
> my eye conveys the intended structure much better and compiles at first try.
In languages that don't use currying, you would write
f (1, 2) + g (2, 3)
which also gives application precedence over infix
operators. So, I think, we can safely say that application
being stronger than infix operators is the standard
situation.
Nevertheless, the currying notation is a matter of habit.
It took me a while to get used to it, too (as did layout).
But now, I wouldn't want to miss them anymore. And as far
as layout is concerned, I think, the Python people have made
the same experience. For humans, it is quite natural to use
visual cues (like layout) to indicate semantics.
Cheers,
Manuel
From pk@cs.tut.fi Tue May 22 15:10:40 2001
From: pk@cs.tut.fi (Kellomaki Pertti)
Date: Tue, 22 May 2001 17:10:40 +0300
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org>
<3B0A241F.2F7E564A@cs.tut.fi> <20010522235448A.chak@cse.unsw.edu.au>
Message-ID: <3B0A7360.413DDEE9@cs.tut.fi>
"Manuel M. T. Chakravarty" wrote:
> In languages that don't use currying, you would write
> f (1, 2) + g (2, 3)
> which also gives application precedence over infix
> operators. So, I think, we can safely say that application
> being stronger than infix operators is the standard
> situation.
Agreed, though you must remember that where I come from there is no
precedence at all.
> And as far
> as layout is concerned, I think, the Python people have made
> the same experience. For humans, it is quite natural to use
> visual cues (like layout) to indicate semantics.
Two points: I have been with Haskell less than half a year, and already
I have run into a layout-related bug in a tool that produces Haskell
source. This does not raise my confidence in the approach very much.
Second, to a Lisp-head like myself something like
    (let ((a 0)
          (b 1))
      (+ a b))
does exactly what you say: it uses layout to indicate semantics. The
parentheses are there only to indicate semantics to the machine, and to
make it easy for tools to pretty print the expression in such a way that
the layout reflects the semantics as seen by the machine.
But all this is not very constructive, because Haskell is not going to
change into a fully parenthesized prefix syntax at my wish.
--
Pertti Kellom\"aki, Tampere Univ. of Technology, Software Systems Lab
From afie@cs.uu.nl Tue May 22 15:23:25 2001
From: afie@cs.uu.nl (Arjan van IJzendoorn)
Date: Tue, 22 May 2001 16:23:25 +0200
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org> <3B0A241F.2F7E564A@cs.tut.fi> <20010522235448A.chak@cse.unsw.edu.au> <3B0A7360.413DDEE9@cs.tut.fi>
Message-ID: <010501c0e2ca$c36d1130$ec50d383@sushi>
> > For humans, it is quite natural to use
> > visual cues (like layout) to indicate semantics.
I agree, but let us not try to do that with just two (already overloaded)
symbols.
> (let ((a 0)
> (b 1))
> (+ a b))
let { a = 0; b = 1; } in a + b
is valid Haskell and the way I use the language. Enough and more descriptive
visual cues, I say.
Using layout is an option, not a rule (although the thing is called layout
rule...)
> But all this is not very constructive, because Haskell is not going to
> change into a fully parenthesized prefix syntax at my wish.
Thank god :-)
Arjan
From paul.hudak@yale.edu Tue May 22 15:36:05 2001
From: paul.hudak@yale.edu (Paul Hudak)
Date: Tue, 22 May 2001 10:36:05 -0400
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org>
<3B0A241F.2F7E564A@cs.tut.fi> <20010522235448A.chak@cse.unsw.edu.au> <3B0A7360.413DDEE9@cs.tut.fi>
Message-ID: <3B0A7955.5E1DEBB4@yale.edu>
> Two points: I have been with Haskell less than half a year, and already
> I have run into a layout-related bug in a tool that produces Haskell
> source.
Why not have your tool generate layout-less code? Surely that would be
easier to program, and be less error prone.
> Second, to a Lisp-head like myself something like
> (let ((a 0)
>       (b 1))
>   (+ a b))
> does exactly what you say: it uses layout to indicate semantic.
Yes, but the layout is not ENFORCED. I programmed in Lisp for many
years before switching to Haskell, and a common error is something like
this:
> (let ((a 0)
>       (b 1)
>   (+ a b)))
In this case the error is relatively easy to spot, but in denser code it
can be very subtle. So in fact using layout in Lisp can imply a
semantics that is simply wrong.
-Paul
From pk@cs.tut.fi Tue May 22 15:51:58 2001
From: pk@cs.tut.fi (Kellomaki Pertti)
Date: Tue, 22 May 2001 17:51:58 +0300
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org>
<3B0A241F.2F7E564A@cs.tut.fi> <20010522235448A.chak@cse.unsw.edu.au> <3B0A7360.413DDEE9@cs.tut.fi> <3B0A7955.5E1DEBB4@yale.edu>
Message-ID: <3B0A7D0E.FC77CC99@cs.tut.fi>
I realize this is a topic where it would be very easy to start a flame
war, but hopefully we can avoid that.
Paul Hudak wrote:
> Why not have your tool generate layout-less code? Surely that would be
> easier to program, and be less error prone.
The tool in question is Happy, and the error materialized as an
interaction between the tool-generated parser code and the hand-written
code in actions. So no, this was not an option, since the tool is not
written by me, and given my current capabilities in Haskell I could not
even fix it. On the other hand the bug is easy to work around, and it
might even be fixed in newer versions of Happy.
> Yes, but the layout is not ENFORCED. I programmed in Lisp for many
> years before switching to Haskell, and a common error is something like
> this:
>
> > (let ((a 0)
> >       (b 1)
> >   (+ a b)))
>
> In this case the error is relatively easy to spot, but in denser code it
> can be very subtle. So in fact using layout in Lisp can imply a
> semantics that is simply wrong.
Maybe I did not express my point clearly. What I was trying to say was
that because of the syntax, it is very easy for M-C-q in Emacs to
convert that to
    (let ((a 0)
          (b 1)
          (+ a b)))
which brings the layout of the source code into agreement with how it is
perceived by the compiler/interpreter. So it is easy for me to enforce
the layout.
This is not so much of an issue when you are writing the code in the
first place, but I find it a pain to have to adjust indentation when I
move bits of code around in an evolving program. If there is good
support for that, then I'll just shut up and start using it. After all,
I have only been using Haskell for a very short period of time.
--
pertti
From paul.hudak@yale.edu Tue May 22 15:58:35 2001
From: paul.hudak@yale.edu (Paul Hudak)
Date: Tue, 22 May 2001 10:58:35 -0400
Subject: Functional programming in Python
References: <20010521160104.A5990255C2@www.haskell.org>
<3B0A241F.2F7E564A@cs.tut.fi> <20010522235448A.chak@cse.unsw.edu.au> <3B0A7360.413DDEE9@cs.tut.fi> <3B0A7955.5E1DEBB4@yale.edu> <3B0A7D0E.FC77CC99@cs.tut.fi>
Message-ID: <3B0A7E9B.E31B9807@yale.edu>
> I realize this is a topic where it would be very easy to start a flame
> war, but hopefully we can avoid that.
No problem :-)
> Maybe I did not express my point clearly. What I was trying to say was
> that
> because of the syntax, it is very easy for M-C-q in Emacs to convert
> that to ...
Ok, I understand now. So clearly we just need better editing tools for
Haskell, which I guess is part of your point.
By the way, there are many Haskell programmers who prefer to write their
programs like this:
let { a = x
    ; b = y
    ; c = z
    }
in ...
which arguably has its merits.
-Paul
From brk@jenkon.com Tue May 22 18:00:28 2001
From: brk@jenkon.com (brk@jenkon.com)
Date: Tue, 22 May 2001 10:00:28 -0700
Subject: Functional programming in Python
Message-ID: <61AC3AD3E884D411836F0050BA8FE9F335522A@franklin.jenkon.com>
> -----Original Message-----
> From: Manuel M. T. Chakravarty [SMTP:chak@cse.unsw.edu.au]
> Sent: Tuesday, May 22, 2001 6:55 AM
> To: pk@cs.tut.fi
> Cc: haskell-cafe@haskell.org
> Subject: Re: Functional programming in Python
>
> Pertti Kellomäki wrote,
>
> > > From: Ketil Malde
> > > "Manuel M. T. Chakravarty" writes:
> > > > You want to be able to write
> > >
> > > > f 1 2 + g 3 4
> > >
> > > > instead of
> > >
> > > > (f 1 2) + (g 3 4)
> > >
> > > I do? Personally, I find it a bit confusing, and I still often get it
> > > wrong on the first attempt.
> >
> > Same here. A while back someone said something along the lines that
> > people come to Haskell because of the syntax. For me it is the other
> > way around.
> > My background is in Scheme/Lisp, and I still find it irritating that I
> > cannot just say indent-sexp and the like in Emacs. It is the other
> > properties of the language that keep me using it. I also get irritated
> > when I get precedence wrong, so in fact I tend to write
> > (f 1 2) + (g 2 3), which to my eye conveys the intended structure much
> > better and compiles at first try.
>
> In languages that don't use currying, you would write
>
> f (1, 2) + g (2, 3)
>
> which also gives application precedence over infix
> operators. So, I think, we can safely say that application
> being stronger than infix operators is the standard
> situation.
[Bryn Keller]
There's another piece to this question that we're overlooking, I
think. It's not just a difference (or lack thereof) in precedence, it's
the fact that parentheses indicate application in Python and many other
languages, and a function name without parentheses after it is a
reference to the function, not an application of it. This has nothing to
do with currying that I can see - you can have curried functions in
Python, and they still look the same. The main advantage I see for the
Haskell style is (sometimes) fewer keypresses for parentheses, but I
still find it surprising at times. Unfortunately in many cases you need
to apply nearly as many parens for a Haskell expression as you would for
a Python one, but they're in different places. It's not:
    foo( bar( baz( x ) ) )
it's:
    (foo ( bar (baz x) ) )
I'm not sure why folks thought this was an improvement. I suppose it
bears more resemblance to lambda calculus?
> Nevertheless, the currying notation is a matter of habit.
> It took me a while to get used to it, too (as did layout).
> But now, I wouldn't want to miss them anymore. And as far
> as layout is concerned, I think, the Python people have made
> the same experience. For humans, it is quite natural to use
> visual cues (like layout) to indicate semantics.
[Bryn Keller]
Absolutely. Once you get used to layout (Haskell style or Python
style), everything else looks like it was designed specifically to
irritate you. On the other hand, it's nice to have a brace-delimited
style since that makes autogenerating code a lot easier.
Bryn
> Cheers,
> Manuel
From heringto@cs.unc.edu Tue May 22 18:16:55 2001
From: heringto@cs.unc.edu (Dean Herington)
Date: Tue, 22 May 2001 13:16:55 -0400
Subject: Functional programming in Python
References: <61AC3AD3E884D411836F0050BA8FE9F335522A@franklin.jenkon.com>
Message-ID: <3B0A9F06.F2C28F58@cs.unc.edu>
brk@jenkon.com wrote:
> There's another piece to this question that we're overlooking, I
> think. It's not just a difference (or lack thereof) in precedence, it's the
> fact that parentheses indicate application in Python and many other
> languages, and a function name without parentheses after it is a reference
> to the function, not an application of it. This has nothing to do with
> currying that I can see - you can have curried functions in Python, and they
> still look the same. The main advantage I see for the Haskell style is
> (sometimes) fewer keypresses for parentheses, but I still find it surprising
> at times. Unfortunately in many cases you need to apply nearly as many
> parens for a Haskell expression as you would for a Python one, but they're
> in different places. It's not:
>
> foo( bar( baz( x ) ) )
> it's:
> (foo ( bar (baz x) ) )
>
> I'm not sure why folks thought this was an improvement. I suppose it
> bears more resemblance to lambda calculus?
In Haskell, one doesn't need to distinguish "a reference to the function" from
"an application of it". As a result, parentheses need to serve only a single
function, that of grouping. Parentheses surround an entire function
application, just as they surround an entire operation application:
foo (fum 1 2) (3 + 4)
I find this very consistent, simple, and elegant.
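A small sketch of that single role of parentheses (the names here are
made up, just for illustration):
    inc :: Int -> Int
    inc n = n + 1

    applied = inc 41             -- application by juxtaposition: 42
    asValue = map inc [1, 2, 3]  -- the bare name is passed as a value: [2,3,4]
    grouped = inc (1 + 2) * 3    -- parentheses only group: (inc 3) * 3 = 12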
Dean
From qrczak@knm.org.pl Tue May 22 21:53:57 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 22 May 2001 20:53:57 GMT
Subject: 'lazy mindset' was: HUGS error: Unresolved overloading
References: <200105210634.PAA14039@ropas.kaist.ac.kr> <3B0A502F.EF7771DA@cs.york.ac.uk>
Message-ID:
I have a case where I don't know how to apply laziness well.
Consider mutable containers (e.g. STArray or a database on disk).
How to iterate over them? I see the following possibilities:
* Give up laziness, provide only a conversion to the list of items.
Since the conversion is monadic, it must construct the whole
list before it returns. Such conversion must be present anyway,
especially as this is the only way to ensure that we snapshot
consistent contents, and it can be argued that we usually don't
really need anything more sophisticated, and for some
cases we might get values by keys/indices generated in some lazy way.
* Use unsafeInterleaveIO and do a similar thing to hGetContents.
Easy to use but impure and dangerous if evaluation is not forced
early enough.
* Invent a lazy monadic sequence:
newtype Lazy m a = Lazy {getBegin :: m (Maybe (a, Lazy m a))}
(This can't be just
type Lazy m a = m (Maybe (a, Lazy m a))
because it would be a recursive type.)
It's generic: I can iterate over a collection and stop at any point.
But using it is not a pleasure. Some generic operations make sense
for this without changing the interface (filter, takeWhile), some
don't (filterM - it can't work for arbitrary monad, only for the
one used in the particular lazy sequence), and it's needed to have
separate versions of some operations, e.g.
filterLazy :: Monad m => (a -> m Bool) -> Lazy m a -> Lazy m a
which fits neither filter nor filterM. It artificially splits my
class design or requires methods implemented as error "sorry, not
applicable".
* Introduce a stateful iterator, like C++ or Java. This is ugly but
would work too. Probably just the simplest version, i.e.
getIter :: SomeMutableContainer m a -> m (m (Maybe a))
without going backwards etc.
Suggestions? I think I will do the last choice, after realizing that
Lazy m a is not that nice, but perhaps it can be done better...
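A rough sketch of that last choice, specialised to IO and to a toy
container that is just a list in an IORef (the names are only for
illustration; the real thing would be over an arbitrary monad m):
    import Data.IORef

    -- A toy mutable container: just a list in an IORef.
    type Container a = IORef [a]

    -- The iterator is an action that yields Just the next element on
    -- each call, and Nothing once the snapshot taken at creation time
    -- is exhausted.
    getIter :: Container a -> IO (IO (Maybe a))
    getIter c = do
        xs  <- readIORef c
        pos <- newIORef xs
        return $ do
            rest <- readIORef pos
            case rest of
              []     -> return Nothing
              (y:ys) -> do writeIORef pos ys
                           return (Just y)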
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From Tom.Pledger@peace.com Tue May 22 23:23:10 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Wed, 23 May 2001 10:23:10 +1200
Subject: Recursive types?
In-Reply-To: <6605DE4621E5934593474F620A14D700190C19@red-msg-07.redmond.corp.microsoft.com>
References: <6605DE4621E5934593474F620A14D700190C19@red-msg-07.redmond.corp.microsoft.com>
Message-ID: <15114.59086.164856.212086@waytogo.peace.co.nz>
(Bailing out to haskell-cafe because this stuff is probably old hat)
David Bakin writes:
| Thank you.
|
| Now, on seeing your Tree example I tried to use it by defining
| height using structural recursion. But I couldn't find a valid
| syntax to pattern match - or otherwise define the function - on the
| type CT. Is there one?
No. You can generalise over some structures
ctHeight :: (CT c k e -> Tree c k e) -> CT c k e -> Int
ctHeight extract ct
  = case extract ct of
      Empty      -> 0
      Node l _ r -> let hl = ctHeight extract l
                        hr = ctHeight extract r
                    in 1 + max hl hr

plainHeight    = ctHeight (\(Plain t) -> t)
labelledHeight = ctHeight snd
where the role of `extract' is a simplification of that of `phi' in
section 5.1 of the paper you mentioned (MPJ's Functional Programming
with Overloading and Higher-Order Polymorphism). I'm rapidly getting
out of my depth in bananas and lenses, so won't try to be more
specific about the similarity.
Anyway, the type of ctHeight is not appropriate for all possible
structures. For example, to follow an STRef you must visit an ST:
import ST

stHeight :: CT STRef s e -> ST s Int
stHeight ct
  = do t <- readSTRef ct
       case t of
         Empty      -> return 0
         Node l _ r -> do hl <- stHeight l
                          hr <- stHeight r
                          return (1 + max hl hr)
- Tom
:
| -----Original Message-----
| From: Tom Pledger [mailto:Tom.Pledger@peace.com]
:
| One advantage of such higher-order types is reusability. For example,
| this
|
| type CT c k e
|   = c k (Tree c k e)  -- Contained Tree, container, key, element
| data Tree c k e
|   = Empty
|   | Node (CT c k e) e (CT c k e)
|
| can be used with no frills
|
| newtype Plain k t = Plain t
| ... :: Tree Plain () e
|
| or with every subtree labelled
|
| ... :: Tree (,) String e
|
| or with every subtree inside the ST monad
|
| import ST
| ... :: Tree STRef s e
From tweed@compsci.bristol.ac.uk Wed May 23 10:44:04 2001
From: tweed@compsci.bristol.ac.uk (D. Tweed)
Date: Wed, 23 May 2001 10:44:04 +0100 (BST)
Subject: Templates in FPL?
In-Reply-To:
Message-ID:
On 22 May 2001, Carl R. Witty wrote:
> "D. Tweed" writes:
>
> > In my experience the C++ idiom `you only pay for what you use' (==>
> > templates are essentially type-checked macros) and the fact most compilers
> > are evolved from C compilers makes working with templates a real pain in
> > practice.
>
> I'm not sure what you mean by type-checked here. Templates are not
> type-checked at definition time, but are type-checked when they are
> used; the same is true of ordinary macros.
I was thinking in terms of (to take a really simple example)
template<class T>
void
initialiseArray(T** arr,const T& elt,int bnd)
{
  for(int i=0;i<bnd;++i)
    ...   // <- the indicated line; the body uses a member function elt.realValue()
}
If I try and use initialiseArray(,), with
the template function I get the error when passing in the parameters; with
a macro the type error would appear at the indicated line, and indeed if
by pure chance bar just happened to have a member function called
realValue then I wouldn't get one at all. In my view it's reasonable
to say that template functions are type-checked in essentially the same
way as general functions, giving type errors in terms of the source
code that you're using, whereas there's no way to ignore the fact
macros dump a lot of code at the calling point in your program (mapping
certain identifiers to local identifiers) and then type-checking in
the context of the calling point. The above code is a really contrived
example, but I do genuinely find it useful that template type-error
messages are essentially normal function type-error messages.
[As an aside to Marcin, one of my vague plans (once I've been viva'd,
etc) is to have a go at writing some Haskell code that tries to
simplify g++ template type errors, e.g., taking in a nasty screen-long
error message and saying
`This looks like a list ---- list* mistake'
Like everything I plan to do it may not materialise though.]
___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law: however many computers
email: tweed@cs.bris.ac.uk | you have, half your time is spent
work tel: (0117) 954-5250 | waiting for compilations to finish.
From fjh@cs.mu.oz.au Wed May 23 17:41:49 2001
From: fjh@cs.mu.oz.au (Fergus Henderson)
Date: Thu, 24 May 2001 02:41:49 +1000
Subject: Templates in FPL?
In-Reply-To:
References:
Message-ID: <20010524024149.B14295@hg.cs.mu.oz.au>
On 23-May-2001, D. Tweed wrote:
> On 22 May 2001, Carl R. Witty wrote:
>
> > "D. Tweed" writes:
> >
> > > In my experience the C++ idiom `you only pay for what you use' (==>
> > > templates are essentially type-checked macros) and the fact most compilers
> > > are evolved from C compilers makes working with templates a real pain in
> > > practice.
> >
> > I'm not sure what you mean by type-checked here. Templates are not
> > type-checked at definition time, but are type-checked when they are
> > used; the same is true of ordinary macros.
>
> I was thinking in terms of (to take a really simple example)
>
> template<class T>
> void
> initialiseArray(T** arr,const T& elt,int bnd)
...
> If I try and use intialiseArray(,), with
> the template function I get the error when passing in the parameters;
In other words, *calls to* template functions are type-checked at compile time.
However, *definitions of* template functions are only type-checked when they
are instantiated.
--
Fergus Henderson | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: | -- the last words of T. S. Garp.
From tweed@compsci.bristol.ac.uk Wed May 23 20:31:34 2001
From: tweed@compsci.bristol.ac.uk (D. Tweed)
Date: Wed, 23 May 2001 20:31:34 +0100 (BST)
Subject: Templates in FPL?
In-Reply-To: <20010524024149.B14295@hg.cs.mu.oz.au>
Message-ID:
On Thu, 24 May 2001, Fergus Henderson wrote:
> > > I'm not sure what you mean by type-checked here. Templates are not
> > > type-checked at definition time, but are type-checked when they are
> > > used; the same is true of ordinary macros.
> >
> > I was thinking in terms of (to take a really simple example)
> >
> > template<class T>
> > void
> > initialiseArray(T** arr,const T& elt,int bnd)
> ...
> > If I try and use intialiseArray(,), with
> > the template function I get the error when passing in the parameters;
>
> In other words, *calls to* template functions are type-checked at compile time.
> However, *definitions of* template functions are only type-checked when they
> are instantiated.
Umm... the point I was trying to make (in an inept way) was that the
type-check error messages that you get are in the context of the original
function that you physically wrote; whereas in the macro version
rather the error is in terms of the munged source code, which can make
abstracting it to the `original erroneous source line' difficult.
___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law: however many computers
email: tweed@cs.bris.ac.uk | you have, half your time is spent
work tel: (0117) 954-5250 | waiting for compilations to finish.
From p.g.hancock@swansea.ac.uk Thu May 24 12:08:15 2001
From: p.g.hancock@swansea.ac.uk (Peter Hancock)
Date: Thu, 24 May 2001 12:08:15 +0100 (BST)
Subject: Functional programming in Python
Message-ID: <15116.60319.718276.135092@cspcas.swan.ac.uk>
Hi, you said
> Unfortunately in many cases you need to apply nearly as many
> parens for a Haskell expression as you would for a Python one, but
> they're in different places. It's not:
>
> foo( bar( baz( x ) ) )
> it's:
> (foo ( bar (baz x) ) )
Clearly the outer parentheses are unnecessary in the last expression.
One undeniable advantage of (f a) is it saves parentheses.
My feeling is that the f(a) (mathematical) notation works well when
type set or handwritten, but the (f a) (combinatory logic) notation
looks better with non-proportional fonts.
In a way the f(a) notation "represents things better": the f is at a
higher parenthesis level than the a.
Peter Hancock
From peterd@availant.com Thu May 24 14:07:44 2001
From: peterd@availant.com (Peter Douglass)
Date: Thu, 24 May 2001 09:07:44 -0400
Subject: Functional programming in Python
Message-ID: <8BDAB3CD0E67D411B02400D0B79EA49ADF593D@smail01.clam.com>
Peter Hancock wrote:
> > foo( bar( baz( x ) ) )
> > it's:
> > (foo ( bar (baz x) ) )
>
> Clearly the outer parentheses are unnecessary in the last expression.
> One undeniable advantage of (f a) is it saves parentheses.
Yes and no. In
( ( ( foo bar) baz) x )
the parens can be omitted to leave
foo bar baz x
but in ( foo ( bar (baz x) ) )
You would want the following I think.
foo . bar . baz x
which does have the parens omitted, but requires the composition operator.
--PeterD
From Tom.Pledger@peace.com Thu May 24 22:06:18 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Fri, 25 May 2001 09:06:18 +1200
Subject: Functional programming in Python
In-Reply-To: <8BDAB3CD0E67D411B02400D0B79EA49ADF593D@smail01.clam.com>
References: <8BDAB3CD0E67D411B02400D0B79EA49ADF593D@smail01.clam.com>
Message-ID: <15117.30666.600561.616048@waytogo.peace.co.nz>
Peter Douglass writes:
:
| but in ( foo ( bar (baz x) ) )
|
| You would want the following I think.
|
| foo . bar . baz x
|
| which does have the parens omitted, but requires the composition
| operator.
Almost. To preserve the meaning, the composition syntax would need to
be
(foo . bar . baz) x
or
foo . bar . baz $ x
or something along those lines. I favour the one with parens around
the dotty part, and tend to use $ only when a closing paren is
threatening to disappear over the horizon.
do ...
   return $ case ... of
     ... -- many lines
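To make the equivalence concrete, a tiny sketch with made-up functions:
    foo, bar, baz :: Int -> Int
    foo = (* 2)
    bar = (+ 1)
    baz = negate

    a = foo (bar (baz 3))     -- -4
    b = (foo . bar . baz) 3   -- -4
    c = foo . bar . baz $ 3   -- -4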
Regards,
Tom
From alex@shop.com Fri May 25 18:54:35 2001
From: alex@shop.com (S. Alexander Jacobson)
Date: Fri, 25 May 2001 13:54:35 -0400 (Eastern Daylight Time)
Subject: Functional programming in Python
In-Reply-To: <15117.30666.600561.616048@waytogo.peace.co.nz>
Message-ID:
Does anyone know why the haskell designers did not make the syntax
right associative? It would clean up a lot of stuff.
Haskell                               Non-Haskell
Left Associative                      Right Associative
foo (bar (baz (x)))                   foo bar baz x
foo $ bar $ baz x                     foo bar baz x
add (square x) (square y)             add square x square y
add (square x) y                      add square x y
------------From Prelude----------------------
map f x                               (map f) x
f x (n - 1) x                         f x n - 1 x
f x (foldr1 f xs)                     f x foldr1 f xs
showChar '[' . shows x . showl xs     showChar '[] shows x showl xs
You just need to read from right to left accumulating a stack of
arguments. When you hit a function that can consume some arguments, it
does so. There is an error if you end up with more than one value on
the argument stack.
-Alex-
On Fri, 25 May 2001, Tom Pledger wrote:
> Peter Douglass writes:
> :
> | but in ( foo ( bar (baz x) ) )
> |
> | You would want the following I think.
> |
> | foo . bar . baz x
> |
> | which does have the parens omitted, but requires the composition
> | operator.
>
> Almost. To preserve the meaning, the composition syntax would need to
> be
>
> (foo . bar . baz) x
>
> or
>
> foo . bar . baz $ x
>
> or something along those lines. I favour the one with parens around
> the dotty part, and tend to use $ only when a closing paren is
> threatening to disappear over the horizon.
>
> do ...
> return $ case ... of
> ... -- many lines
>
> Regards,
> Tom
>
___________________________________________________________________
S. Alexander Jacobson Shop.Com
1-646-638-2300 voice The Easiest Way To Shop (sm)
From zhanyong.wan@yale.edu Fri May 25 19:00:54 2001
From: zhanyong.wan@yale.edu (Zhanyong Wan)
Date: Fri, 25 May 2001 14:00:54 -0400
Subject: Functional programming in Python
References:
Message-ID: <3B0E9DD6.F16308C0@yale.edu>
"S. Alexander Jacobson" wrote:
>
> Does anyone know why the haskell designers did not make the syntax
> right associative? It would clean up a lot of stuff.
>
> Haskell                               Non-Haskell
> Left Associative                      Right Associative
> foo (bar (baz (x)))                   foo bar baz x
> foo $ bar $ baz x                     foo bar baz x
> add (square x) (square y)             add square x square y
> add (square x) y                      add square x y
> ------------From Prelude----------------------
> map f x                               (map f) x
> f x (n - 1) x                         f x n - 1 x
> f x (foldr1 f xs)                     f x foldr1 f xs
> showChar '[' . shows x . showl xs     showChar '[] shows x showl xs
>
> You just need to read from right to left accumulating a stack of
> arguments. When you hit a function that can consume some arguments, it
> does so. There is an error if you end up with more than one value on
> the argument stack.
Note that in your proposal,
add square x y
is parsed as
add (square x) y
instead of
add (square (x y)),
so it's not right associative either.
As you explained, the parse of an expression depends on the types of the
sub-expressions, which imo is BAD. Just consider type inference...
-- Zhanyong
From alex@shop.com Fri May 25 22:25:42 2001
From: alex@shop.com (S. Alexander Jacobson)
Date: Fri, 25 May 2001 17:25:42 -0400 (Eastern Daylight Time)
Subject: Functional programming in Python
In-Reply-To: <3B0E9DD6.F16308C0@yale.edu>
Message-ID:
On Fri, 25 May 2001, Zhanyong Wan wrote:
> As you explained, the parse of an expression depends on the types of the
> sub-expressions, which imo is BAD. Just consider type inference...
Ok, your complaint is that f a b c=a b c could have type
(a->b->c)->a->b->c or type (b->c)->(a->b)->a->c depending on the arguments
passed e.g. (f head (map +2) [3]) has different type from (f add 2 3).
Admittedly, this is different from how haskell type checks now. I guess
the question is whether it is impossible to type check or whether it just
requires modification to the type checking algorithm. Does anyone know?
-Alex-
___________________________________________________________________
S. Alexander Jacobson Shop.Com
1-646-638-2300 voice The Easiest Way To Shop (sm)
From jcab@roningames.com Sat May 26 03:52:03 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Fri, 25 May 2001 19:52:03 -0700
Subject: Functional programming in Python
Message-ID: <4.3.2.7.2.20010525195107.036d67c8@207.33.235.243>
At 05:25 PM 5/25/2001 -0400, S. Alexander Jacobson wrote:
>Admittedly, this is different from how haskell type checks now. I guess
>the question is whether it is impossible to type check or whether it just
>requires modification to the type checking algorithm. Does anyone know?
I don't think so... The only ambiguity that I can think of is with
passing functions as arguments to other functions, and you showed that it
can be resolved by currying:
map f x
would have to be force-curried using parentheses:
(map f) x
because otherwise, it would mean:
map (f x)
which is both very wrongly typed and NOT the intention.
I like your parsing scheme. I still DO like more explicit languages
better, though (i.e. map(f, x) style, like C & Co.). Currying is cool, but
it can be kept at a conceptual level, not affecting syntax.
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From sqrtofone@yahoo.com Mon May 28 04:46:37 2001
From: sqrtofone@yahoo.com (Jay Cox)
Date: Sun, 27 May 2001 22:46:37 -0500
Subject: Funny type.
References: <20010506160103.1A2EE255CE@www.haskell.org>
Message-ID: <3B11CA1D.B2729F88@yahoo.com>
One day I was playing around with types and I came across this type:
>data S m a = Nil | Cons a (m (S m a))
The idea being one could use generic functions with whatever monad m
(of course m doesn't need to be a monad, but my original idea was to be
able to make mutable lists with some sort of monad m.)
Anyway in attempting to define a generic show instance for the above
datatype I finally came upon:
>instance (Show a, Show (m (S m a))) => Show (S m a) where
> show Nil = "Nil"
> show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
which boggles my mind. But hugs +98 and ghci
-fallow-undecidable-instances both allow it to compile but when i try
>show s
on
>s = Cons 3 [Cons 4 [], Cons 5 [Cons 2 [],Cons 3 []]]
(btw yes we are "nesting" arbitrary lists here! however structurally
it really isn't much different from any tree datatype)
we get
ERROR -
*** The type checker has reached the cutoff limit while trying to
*** determine whether:
*** Show (S [] Integer)
*** can be deduced from:
*** ()
*** This may indicate that the problem is undecidable. However,
*** you may still try to increase the cutoff limit using the -c
*** option and then try again. (The current setting is -c998)
funny thing is apparently if you set -c to an odd number (on hugs)
it gives
*** The type checker has reached the cutoff limit while trying to
*** determine whether:
*** Show Integer
*** can be deduced from:
*** ()
why would it try to deduce Show Integer?
Anyway, is my instance declaration still a bit mucked up?
Also, could there be a way to give a definition of show for S [] a?
Here's my sample definition just in case anybody is curious.
>myshow Nil = "Nil"
>myshow (Cons x y) = "Cons " ++ show x ++
>                    " [" ++ blowup y ++ "]"
>  where blowup (x:y:ls) = myshow x ++ "," ++ blowup (y:ls)
>        blowup (x:[])   = myshow x
>        blowup []       = ""
Thanks,
Jay Cox
yet another non-academic person on this list.
From Tom.Pledger@peace.com Mon May 28 05:53:43 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Mon, 28 May 2001 16:53:43 +1200
Subject: Functional programming in Python
In-Reply-To:
References: <3B0E9DD6.F16308C0@yale.edu>
Message-ID: <15121.55767.374165.793888@waytogo.peace.co.nz>
S. Alexander Jacobson writes:
| On Fri, 25 May 2001, Zhanyong Wan wrote:
| > As you explained, the parse of an expression depends the types of the
| > sub-expressions, which imo is BAD. Just consider type inference...
Also, we can no longer take a divide-and-conquer approach to reading
code, since the syntax may depend on the types of imports.
| Ok, your complaint is that f a b c=a b c could have type
| (a->b->c)->a->b->c or type (b->c)->(a->b)->a->c depending on the arguments
| passed e.g. (f head (map +2) [3]) has different type from (f add 2 3).
|
| Admittedly, this is different from how haskell type checks now. I guess
| the question is whether it is impossible to type check or whether it just
| requires modification to the type checking algorithm. Does anyone know?
Here's a troublesome example.
module M(trouble) where
f, g :: (a -> b) -> a -> b
f = undefined
g = undefined
trouble = (.) f g
-- ((.) f) g :: (a -> b) -> a -> b
-- (.) (f g) :: (a -> b -> c) -> a -> b -> c
Regards,
Tom
From dpt@math.harvard.edu Mon May 28 06:01:44 2001
From: dpt@math.harvard.edu (Dylan Thurston)
Date: Mon, 28 May 2001 01:01:44 -0400
Subject: Funny type.
In-Reply-To: <3B11CA1D.B2729F88@yahoo.com>
References: <20010506160103.1A2EE255CE@www.haskell.org> <3B11CA1D.B2729F88@yahoo.com>
Message-ID: <20010528010144.A30648@math.harvard.edu>
On Sun, May 27, 2001 at 10:46:37PM -0500, Jay Cox wrote:
> >data S m a = Nil | Cons a (m (S m a))
>...
> >instance (Show a, Show (m (S m a))) => Show (S m a) where
> > show Nil = "Nil"
> > show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
...
> >show s
> >s = Cons 3 [Cons 4 [], Cons 5 [Cons 2 [],Cons 3 []]]
> Anyway, is my instance declaration still a bit mucked up?
Hmm. To try to deduce Show (S [] Integer), the type checker reduces
it by your instance declaration to Show [S [] Integer], which reduces
to Show (S [] Integer), which reduces to...
ghci or hugs could, in theory, be slightly smarter and handle this case.
> Also, could there be a way to give a definition of show for S [] a?
Yes. You could drop the generality:
instance (Show a) => Show (S [] a) where
    show Nil = "Nil"
    show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
Really, the context you want is something like
instance (Show a, forall b. Show b => Show (m b)) => Show (S m a) ...
if that were legal.
--Dylan
From Tom.Pledger@peace.com Mon May 28 06:15:12 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Mon, 28 May 2001 17:15:12 +1200
Subject: Funny type.
In-Reply-To: <3B11CA1D.B2729F88@yahoo.com>
References: <20010506160103.1A2EE255CE@www.haskell.org>
<3B11CA1D.B2729F88@yahoo.com>
Message-ID: <15121.57056.23979.618943@waytogo.peace.co.nz>
Jay Cox writes:
| One day I was playing around with types and I came across this type:
|
| >data S m a = Nil | Cons a (m (S m a))
:
| >instance (Show a, Show (m (S m a))) => Show (S m a) where
| > show Nil = "Nil"
| > show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
|
| which boggles my mind. But hugs +98 and ghci
| -fallow-undecidable-instances both allow it to compile but when i try
|
| >show s
|
| on
|
| >s = Cons 3 [Cons 4 [], Cons 5 [Cons 2 [],Cons 3 []]]
|
| (btw yes we are "nesting" an arbitrary lists here! however structurally
| it really isnt much different from any tree datatype)
|
| we get
|
|
| ERROR -
| *** The type checker has reached the cutoff limit while trying to
| *** determine whether:
| *** Show (S [] Integer)
| *** can be deduced from:
| *** ()
| *** This may indicate that the problem is undecidable. However,
| *** you may still try to increase the cutoff limit using the -c
| *** option and then try again. (The current setting is -c998)
|
|
| funny thing is apparently if you set -c to an odd number (on hugs)
| it gives
|
|
| *** The type checker has reached the cutoff limit while trying to
| *** determine whether:
| *** Show Integer
| *** can be deduced from:
| *** ()
|
| why would it try to deduce Show integer?
It's the first subgoal of Show (S [] Integer). If the cutoff were
greater by 1, presumably it's achieved, and then that last memory cell
is reused for the second subgoal Show [S [] Integer], which in turn
has a subgoal Show (S [] Integer), which overflows... as Dylan's just
pointed out, so I'll stop now.
- Tom
From Malcolm.Wallace@cs.york.ac.uk Mon May 28 10:23:58 2001
From: Malcolm.Wallace@cs.york.ac.uk (Malcolm Wallace)
Date: Mon, 28 May 2001 10:23:58 +0100
Subject: Functional programming in Python
In-Reply-To:
Message-ID: <2F8AAC4ZEjsergoA@cs.york.ac.uk>
It seems that right-associativity is so intuitive that even the person
proposing it doesn't get it right. :-) Partial applications are a
particular problem:
> Haskell Non-Haskell
> Left Associative Right Associative
> ------------From Prelude----------------------
> f x (foldr1 f xs) f x foldr1 f xs
Wouldn't the rhs actually mean f x (foldr1 (f xs)) in current notation?
> showChar '[' . shows x . showl xs showChar '[] shows x showl xs
Wouldn't the rhs actually mean showChar '[' (shows x (showl xs))
in current notation? This is quite different to the lhs composition.
For these two examples, the correct right-associative expressions,
as far as I can tell, should be:
f x (foldr1 f xs)                     f x (foldr1 f) xs
showChar '[' . shows x . showl xs     showChar '[' . shows x . showl xs
Regards,
Malcolm
From simonpj@microsoft.com Mon May 28 10:02:29 2001
From: simonpj@microsoft.com (Simon Peyton-Jones)
Date: Mon, 28 May 2001 02:02:29 -0700
Subject: Funny type.
Message-ID: <37DA476A2BC9F64C95379BF66BA26902D72FB0@red-msg-09.redmond.corp.microsoft.com>
You need a language extension.
Check out Section 7 of "Derivable type classes"
http://research.microsoft.com/~simonpj/Papers/derive.htm
Alas, I have not implemented the idea yet.
(Partly because no one ever reports it as a problem; you
are the first!)
Simon
| One day I was playing around with types and I came across this type:
|
| >data S m a = Nil | Cons a (m (S m a))
|
| The idea being one could use generic functions with whatever
| monad m (of course m doesn't need to be a monad but my
| original idea was to be able to make mutable lists with some
| sort of monad m.)
|
| Anyway in attempting to define a generic show instance for
| the above datatype I finally came upon:
|
| >instance (Show a, Show (m (S m a))) => Show (S m a) where
| > show Nil = "Nil"
| > show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
|
| which boggles my mind. But hugs +98 and ghci
| -fallow-undecidable-instances both allow it to compile but when i try
From mark@chaos.x-philes.com Mon May 28 16:11:26 2001
From: mark@chaos.x-philes.com (Mark Carroll)
Date: Mon, 28 May 2001 11:11:26 -0400 (EDT)
Subject: Multithreaded stateful software
Message-ID:
Often I've found that quite how wonderful a programming language is isn't
clear until you've used it for a non-trivial project. So, I'm still
battling on with Haskell.
One of the projects I have coming up is a multi-threaded server that
manages many clients in performing a distributed computation using a
number of computers. So, we care about state, and control flow has some
concurrent threads and is partially event-driven.
Some possibilities come to my mind:
(a) This really isn't what Haskell was designed for, and if I try to write
this in Haskell I'll never want to touch it again.
(b) This project is quite feasible in Haskell but when it's done I'll feel
I should have just used Java or something.
(c) Haskell's monads, concurrency stuff and TCP/IP libraries are really
quite powerful and useful, and I'll be happy I picked Haskell for the
task.
Does anyone have any thoughts? (-: I have a couple of symbolic computation
tasks too that use complex data structures, which I'm sure that Haskell
would be great for, but I want it to be more generally useful than that
because, although it's nice to always use the best tool for the job, it's
also nice not to be using too many languages in-house.
-- Mark
From ken@digitas.harvard.edu Mon May 28 16:41:13 2001
From: ken@digitas.harvard.edu (Ken Shan)
Date: Mon, 28 May 2001 11:41:13 -0400
Subject: Funny type.
In-Reply-To: <3B11CA1D.B2729F88@yahoo.com>; from sqrtofone@yahoo.com on Sun, May 27, 2001 at 10:46:37PM -0500
References: <20010506160103.1A2EE255CE@www.haskell.org> <3B11CA1D.B2729F88@yahoo.com>
Message-ID: <20010528114113.B5546@digitas.harvard.edu>
On 2001-05-27T22:46:37-0500, Jay Cox wrote:
> >data S m a = Nil | Cons a (m (S m a))
>
> >instance (Show a, Show (m (S m a))) => Show (S m a) where
> > show Nil = "Nil"
> > show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
Here's how I've been handling such situations:
data S m a = Nil | Cons a (m (S m a))
-- "ShowF f" means that the functor f "preserves Show"
class ShowF f where
  showsPrecF :: (Int -> a -> ShowS) -> (Int -> f a -> ShowS)
-- "instance ShowF []" is based on showList
instance ShowF [] where
  showsPrecF sh p []     = showString "[]"
  showsPrecF sh p (x:xs) = showChar '[' . sh 0 x . showl xs
    where showl []     = showChar ']'
          showl (x:xs) = showChar ',' . sh 0 x . showl xs
-- S preserves ShowF
instance (ShowF m) => ShowF (S m) where
  showsPrecF sh p Nil        = showString "Nil"
  showsPrecF sh p (Cons x y) = showString "Cons "
    . sh 0 x . showChar ' ' . showsPrecF (showsPrecF sh) 0 y
-- Now we can define "instance Show (S m a)" as desired
instance (Show a, ShowF m) => Show (S m a) where
  showsPrec = showsPrecF showsPrec
You could call it the poor man's generic programming...
--
Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig
>>My SUV is bigger than your bike. Stay out of the damn road!
>Kiss my reflector, SUV-boy
I'm too busy sucking on my tailpipe, bike dude.
From qrczak@knm.org.pl Mon May 28 17:13:47 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 28 May 2001 16:13:47 GMT
Subject: Functional programming in Python
References: <9etq1o$lof$1@qrnik.zagroda>
Message-ID:
Mon, 28 May 2001 10:23:58 +0100, Malcolm Wallace pisze:
> It seems that right-associativity is so intuitive that even the
> person proposing it doesn't get it right. :-)
And even those who correct them :-)
>> f x (foldr1 f xs) f x foldr1 f xs
>
> Wouldn't the rhs actually mean f x (foldr1 (f xs)) in current notation?
No: f (x (foldr1 (f xs)))
Basically Haskell's style uses curried functions, so it's essential
to be able to apply a function to multiple parameters without a number
of nested parentheses.
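A small sketch of this, with made-up names:
    add :: Int -> Int -> Int
    add x y = x + y

    addTwo :: Int -> Int
    addTwo = add 2               -- partial application, no parentheses

    result :: Int
    result = add 2 3 + add 4 5   -- application binds tighter than (+): 14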
BTW, before I knew Haskell I experimented with a syntax in which 'x f'
is the application of 'f' to 'x', and 'x f g' means '(x f) g'. Other
arguments can also be on the right, but in this case with parentheses,
e.g. 'x f (y)' is a function f applied to two arguments.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From jenglish@flightlab.com Mon May 28 17:21:47 2001
From: jenglish@flightlab.com (Joe English)
Date: Mon, 28 May 2001 09:21:47 -0700
Subject: Multithreaded stateful software
In-Reply-To:
References:
Message-ID: <200105281621.JAA27593@dragon.flightlab.com>
Mark Carroll wrote:
> One of the projects I have coming up is a multi-threaded server that
> manages many clients in performing a distributed computation using a
> number of computers. [...]
>
> (a) This really isn't what Haskell was designed for, and if I try to write
> this in Haskell I'll never want to touch it again.
>
> (b) This project is quite feasible in Haskell but when it's done I'll feel
> I should have just used Java or something.
>
> (c) Haskell's monads, concurrency stuff and TCP/IP libraries are really
> quite powerful and useful, and I'll be happy I picked Haskell for the
> task.
There's also:
(d) You end up learning all sorts of new things about distributed
processing (as well as Haskell) and, armed with the new knowledge,
future problems of the same nature will be easier to solve
no matter what language you use.
That's what usually happens to me.
(Personally, if I had this project coming up, I'd use it
as an excuse to finally learn Erlang...)
--Joe English
jenglish@flightlab.com
From simonpj@microsoft.com Mon May 28 17:28:07 2001
From: simonpj@microsoft.com (Simon Peyton-Jones)
Date: Mon, 28 May 2001 09:28:07 -0700
Subject: Multithreaded stateful software
Message-ID: <37DA476A2BC9F64C95379BF66BA26902D72FD0@red-msg-09.redmond.corp.microsoft.com>
| (c) Haskell's monads, concurrency stuff and TCP/IP libraries
| are really quite powerful and useful, and I'll be happy I
| picked Haskell for the task.
Definitely (c). See Simon Marlow's paper about his experience
of writing a web server (highly concurrent), and my tutorial
"Tackling the awkward squad". Both at
http://research.microsoft.com/~simonpj/papers/marktoberdorf.htm
Haskell is a great language for writing concurrent applications.
Simon
From mark@chaos.x-philes.com Mon May 28 19:02:48 2001
From: mark@chaos.x-philes.com (Mark Carroll)
Date: Mon, 28 May 2001 14:02:48 -0400 (EDT)
Subject: Multithreaded stateful software
In-Reply-To: <37DA476A2BC9F64C95379BF66BA26902D72FD0@red-msg-09.redmond.corp.microsoft.com>
Message-ID:
On Mon, 28 May 2001, Simon Peyton-Jones wrote:
(snip)
> http://research.microsoft.com/~simonpj/papers/marktoberdorf.htm
>
> Haskell is a great language for writing concurrent applications.
Thanks! That's very interesting. In a way, I guess I'm taking something of
a leap of faith: if everything goes to plan, then the code may be used for
quite some time, being extended when necessary, so I must hope that
various useful GHC extensions, perhaps slightly modified, will be
preserved and maintained in some form in GHC or some other Haskell
compiler. When choosing a programming language, it's hard to trade
off wanting certainty of a good, free compiler still existing in a few
years' time that can compile your code with minimal tinkering, against
wanting to actually benefit from a lot of the important programming
language research that's gone on. (-:
I get the feeling that, although experimental, a lot of the various
extensions are probably more or less the way things will go and, of
languages in its class, Haskell seems to be doing really quite well, so
I'm not all that worried; really I'm just noting the issue.
But, back to the main point: thanks very much! These papers give me some
faith that maybe Haskell is now as generally useful as I'd hoped.
-- Mark
From Tom.Pledger@peace.com Mon May 28 21:59:04 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Tue, 29 May 2001 08:59:04 +1200
Subject: Why is there a space leak here?
In-Reply-To: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout>
References: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout>
Message-ID: <15122.48152.852254.22700@waytogo.peace.co.nz>
David Bakin writes:
:
| I have been puzzling over this for nearly a full day (getting this
| reduced version from my own code which wasn't working). In
| general, how can I either a) analyze code looking for a space leak
| or b) experiment (e.g., using Hugs) to find a space leak? Thanks!
| -- Dave
a) Look at how much of the list needs to exist at any one time.
| -- This has a space leak, e.g., when reducing (length (foo1 1000000))
| foo1 m
|   = take m v
|   where
|     v = 1 : flatten (map triple v)
|     triple x = [x,x,x]
When you consume the (3N)th cell of v, you can't yet garbage collect
the Nth cell because it will be needed for generating the (3N+1)th,
(3N+2)th and (3N+3)th.
So, as you proceed along the list, about two thirds of it must be
retained in memory.
| -- This has no space leak, e.g., when reducing (length (foo2 1000000))
| foo2 m
|   = take m v
|   where
|     v = 1 : flatten (map single v)
|     single x = [x]
By contrast, when you consume the (N+1)th cell of this v, you free up
the Nth, so foo2 runs in constant space.
| -- flatten a list-of-lists
| flatten :: [[a]] -> [a]
:
Rather like concat?
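If you want to see the difference in numbers, here is a minimal sketch
(using concat in place of flatten, and a made-up size); running the
compiled program with +RTS -sstderr should show a much larger maximum
residency for foo1 than for foo2:
    main :: IO ()
    main = print (length (foo1 1000000))   -- swap in foo2 to compare

    foo1, foo2 :: Int -> [Int]
    foo1 m = take m v
      where
        v        = 1 : concat (map triple v)
        triple x = [x, x, x]

    foo2 m = take m v
      where
        v        = 1 : concat (map single v)
        single x = [x]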
Regards,
Tom
From mg169780@students.mimuw.edu.pl Mon May 28 22:25:21 2001
From: mg169780@students.mimuw.edu.pl (Michal Gajda)
Date: Mon, 28 May 2001 23:25:21 +0200 (CEST)
Subject: Why is there a space leak here?
In-Reply-To: <15122.48152.852254.22700@waytogo.peace.co.nz>
Message-ID:
On Tue, 29 May 2001, Tom Pledger wrote:
> David Bakin writes:
>
> a) Look at how much of the list needs to exist at any one time.
>
> | -- This has a space leak, e.g., when reducing (length (foo1 1000000))
> | foo1 m
> | = take m v
> | where
> | v = 1 : flatten (map triple v)
> | triple x = [x,x,x]
>
> When you consume the (3N)th cell of v, you can't yet garbage collect
> the Nth cell because it will be needed for generating the (3N+1)th,
> (3N+2)th and (3N+3)th.
>
> So, as you proceed along the list, about two thirds of it must be
> retained in memory.
Last sentence seems false. You free up Nth cell of v when you finish with
3Nth cell of result.
> | -- This has no space leak, e.g., when reducing (length (foo2 1000000))
> | foo1 m
(...the only difference below:)
> | single x = [x]
Greetings :-)
Michal Gajda
korek@icm.edu.pl
*knowledge-hungry student*
From Tom.Pledger@peace.com Mon May 28 22:48:58 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Tue, 29 May 2001 09:48:58 +1200
Subject: Why is there a space leak here?
In-Reply-To:
References: <15122.48152.852254.22700@waytogo.peace.co.nz>
Message-ID: <15122.51146.136004.405892@waytogo.peace.co.nz>
Michal Gajda writes:
| On Tue, 29 May 2001, Tom Pledger wrote:
:
| > When you consume the (3N)th cell of v, you can't yet garbage collect
| > the Nth cell because it will be needed for generating the (3N+1)th,
| > (3N+2)th and (3N+3)th.
| >
| > So, as you proceed along the list, about two thirds of it must be
| > retained in memory.
|
| Last sentence seems false. You free up Nth cell of v when you
| finish with 3Nth cell of result.
I counted from 0. Scouts' honour. Call (!!) as a witness.
;-)
From davidbak@cablespeed.com Mon May 28 23:22:10 2001
From: davidbak@cablespeed.com (David Bakin)
Date: Mon, 28 May 2001 15:22:10 -0700
Subject: Why is there a space leak here?
References: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout> <15122.48152.852254.22700@waytogo.peace.co.nz>
Message-ID: <001a01c0e7c4$a6c887e0$5900a8c0@girlsprout>
Ah, thank you for pointing out concat to me. (Oddly, without knowing about
concat, I had tried foldr1 (++) and also foldl1 (++) but got the same space
problem and so tried to 'factor it out'.)
OK, now I see what's going on - your explanation is good, thanks.
Which of the various tools built-in or added to Hugs, GHC, NHC, etc. would
help me visualize what's actually going on here? I think Hood would (using
a newer Hugs, of course, I'm going to try it). What else?
-- Dave
----- Original Message -----
From: "Tom Pledger"
To: "David Bakin"
Cc:
Sent: Monday, May 28, 2001 1:59 PM
Subject: Why is there a space leak here?
> David Bakin writes:
> :
> | I have been puzzling over this for nearly a full day (getting this
> | reduced version from my own code which wasn't working). In
> | general, how can I either a) analyze code looking for a space leak
> | or b) experiment (e.g., using Hugs) to find a space leak? Thanks!
> | -- Dave
>
> a) Look at how much of the list needs to exist at any one time.
>
> | -- This has a space leak, e.g., when reducing (length (foo1 1000000))
> | foo1 m
> | = take m v
> | where
> | v = 1 : flatten (map triple v)
> | triple x = [x,x,x]
>
> When you consume the (3N)th cell of v, you can't yet garbage collect
> the Nth cell because it will be needed for generating the (3N+1)th,
> (3N+2)th and (3N+3)th.
>
> So, as you proceed along the list, about two thirds of it must be
> retained in memory.
>
> | -- This has no space leak, e.g., when reducing (length (foo2 1000000))
> | foo2 m
> | = take m v
> | where
> | v = 1 : flatten (map single v)
> | single x = [x]
>
> By contrast, when you consume the (N+1)th cell of this v, you free up
> the Nth, so foo2 runs in constant space.
>
> | -- flatten a list-of-lists
> | flatten :: [[a]] -> [a]
> :
>
> Rather like concat?
>
> Regards,
> Tom
>
From jcab@roningames.com Tue May 29 01:53:38 2001
From: jcab@roningames.com (Juan Carlos Arevalo Baeza)
Date: Mon, 28 May 2001 17:53:38 -0700
Subject: Type resolution problem
In-Reply-To: <20010528114113.B5546@digitas.harvard.edu>
References: <3B11CA1D.B2729F88@yahoo.com>
<20010506160103.1A2EE255CE@www.haskell.org>
<3B11CA1D.B2729F88@yahoo.com>
Message-ID: <4.3.2.7.2.20010528173801.03795668@207.33.235.243>
I'm having a little of a problem with a project of mine. Just check out
this, which is the minimum piece of code that will show this problem:
----
type MyType a = a

(+++) :: MyType a -> MyType a -> MyType a
a +++ b = a

class MyClass a where
    baseVal :: a

val1 :: MyClass a => MyType a
val1 = baseVal

val2 :: MyClass a => MyType a
val2 = baseVal

-- combVal :: MyClass a => MyType a
combVal = val1 +++ val2 -- line 18 is here...
----
Trying this in Hugs returns the following error:
----
ERROR E:\JCAB\Haskell\testcv.hs:18 - Unresolved top-level overloading
*** Binding : combVal
*** Outstanding context : MyClass b
----
Now, how can it possibly say that the context "MyClass b" is
outstanding? What does this mean?
Uncommenting the type expression above clears the error. But, why can't
the compiler deduce it by itself? I mean, if a function has type shaped
like in (a -> a -> a) and it is used with parameters (cv => a) where the
constrains are identical, then the result MUST be (cv => a) too, right? Or
am I missing something here?
Don't get me wrong, I can just put type declarations everywhere in my
code. It's a good thing, too. But this problem is really nagging at me
because I don't get where the problem is.
Any ideas or pointers?
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
From chak@cse.unsw.edu.au Tue May 29 05:15:58 2001
From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty)
Date: Tue, 29 May 2001 14:15:58 +1000
Subject: Multithreaded stateful software
In-Reply-To:
References:
Message-ID: <20010529141558E.chak@cse.unsw.edu.au>
Mark Carroll wrote,
> One of the projects I have coming up is a multi-threaded server that
> manages many clients in performing a distributed computation using a
> number of computers. So, we care about state, and control flow has some
> concurrent threads and is partially event-driven.
>
> Some possibilities come to my mind:
>
> (a) This really isn't what Haskell was designed for, and if I try to write
> this in Haskell I'll never want to touch it again.
>
> (b) This project is quite feasible in Haskell but when it's done I'll feel
> I should have just used Java or something.
>
> (c) Haskell's monads, concurrency stuff and TCP/IP libraries are really
> quite powerful and useful, and I'll be happy I picked Haskell for the
> task.
In my experience features such as Haskell's type system and
the ease with which you can handle higher-order functions
are extremely useful in code that has to deal with state and
concurrency.
I guess, the main problem with using Haskell for these kinds
of applications is that relatively little has been written
about them yet. SimonPJ's paper "Tackling the awkward
squad" and SimonM's Web server improved the situation, but,
for example, none of the Haskell textbooks covers these
features. Nevertheless, there are quite a number of people
now who have used Haskell in ways similar to what you need.
So, don't hesitate to ask on this or other Haskell lists if
you have questions or need example code.
Cheers,
Manuel
From qrczak@knm.org.pl Tue May 29 06:41:19 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 29 May 2001 05:41:19 GMT
Subject: Type resolution problem
References: <3B11CA1D.B2729F88@yahoo.com> <20010506160103.1A2EE255CE@www.haskell.org> <3B11CA1D.B2729F88@yahoo.com> <9evad0$rc7$1@qrnik.zagroda>
Message-ID:
Mon, 28 May 2001 17:53:38 -0700, Juan Carlos Arevalo Baeza pisze:
> Uncommenting the type expression above clears the error.
> But, why can't the compiler deduce it by itself?
Monomorphism restriction strikes again. See section 4.5.5 in the
Haskell 98 Report. A pattern binding without an explicit type signature
is considered monomorphic (must resolve to a single non-overloaded
type) and points of usage or defaulting rules for numeric types
determine the type (neither applies here).
IMHO monomorphism restriction should be removed (the case for bindings
of the form var = expr without type signature).
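A minimal sketch of the rule, with a made-up class:
    class C a where
        c :: a

    -- A pattern binding without a type signature: the monomorphism
    -- restriction applies, and since C is not a defaultable class the
    -- overloading cannot be resolved.
    -- bad = c

    -- With an explicit signature the binding is generalised as usual.
    good :: C a => a
    good = c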
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From kahl@heraklit.informatik.unibw-muenchen.de Tue May 29 08:49:55 2001
From: kahl@heraklit.informatik.unibw-muenchen.de (kahl@heraklit.informatik.unibw-muenchen.de)
Date: 29 May 2001 07:49:55 -0000
Subject: Why is there a space leak here?
In-Reply-To: <001a01c0e7c4$a6c887e0$5900a8c0@girlsprout>
(davidbak@cablespeed.com)
References: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout> <15122.48152.852254.22700@waytogo.peace.co.nz> <001a01c0e7c4$a6c887e0$5900a8c0@girlsprout>
Message-ID: <20010529074955.4372.qmail@dionysos.informatik.unibw-muenchen.de>
David Bakin writes:
> Which of the various tools built-in or added to Hugs, GHC, NHC, etc. would
> help me visualize what's actually going on here? I think Hood would (using
> a newer Hugs, of course, I'm going to try it). What else?
I just used my old ghc-4.06 add-in ``middle-end'' ``MHA'' to generate
a HOPS module from David's message (slightly massaged, appended below),
and then used HOPS to generate two animations:
http://ist.unibw-muenchen.de/kahl/MHA/Bakin_foo1_20.ps.gz
http://ist.unibw-muenchen.de/kahl/MHA/Bakin_foo2_20.ps.gz
Hold down the space key in ghostview to get the animation effect!
The left ``fan'', present in both examples, is the result list,
and in a real run it only takes up space for as long as it is being used.
The right ``fan'', visible only in foo1, contains the cycle
of the definition of v, and represents the space leak.
The take copies cons nodes away from the cycle.
The HOPS input was generated automatically by an unreleased
GHC ``middle end'' that is still stuck at ghc-4.06.
The homepage of my term graph programming system HOPS is:
http://ist.unibw-muenchen.de/kahl/HOPS/
Wolfram
> module Bakin where
-- This has a space leak, e.g., when reducing (length (foo1 1000000))
> foo1 :: Int -> [Int]
> foo1 m
> = take m v
> where
> v = 1 : flatten (map triple v)
> triple x = [x,x,x]
-- This has no space leak, e.g., when reducing (length (foo2 1000000))
> foo2 :: Int -> [Int]
> foo2 m
> = take m v
> where
> v = 1 : flatten (map single v)
> single x = [x]
-- flatten a list-of-lists
> flatten :: [[a]] -> [a]
> flatten [] = []
> flatten ([]:xxs) = flatten xxs
> flatten ((x':xs'):xxs) = x' : flatten' xs' xxs
> flatten' [] xxs = flatten xxs
> flatten' (x':xs') xxs = x': flatten' xs' xxs
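Independently of HOPS, the difference should also show up with GHC's
ordinary heap profiler. Roughly (a driver of my own; the flag names are
those of the GHC versions I have at hand, so check your local docs):

module Main where
import Bakin
main :: IO ()
main = print (length (foo1 1000000))

  ghc -c -prof -auto-all Bakin.hs
  ghc -c -prof -auto-all Main.hs
  ghc -prof -o leak Main.o Bakin.o
  ./leak +RTS -hc -RTS
  hp2ps leak.hp          # the graph keeps growing for foo1,
                         # and stays flat if main uses foo2 instead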
From koen@cs.chalmers.se Tue May 29 09:29:01 2001
From: koen@cs.chalmers.se (Koen Claessen)
Date: Tue, 29 May 2001 10:29:01 +0200 (MET DST)
Subject: Funny type.
In-Reply-To: <20010528114113.B5546@digitas.harvard.edu>
Message-ID:
Jay Cox complained that the following is not possible:
| data S m a = Nil | Cons a (m (S m a))
|
| instance (Show a, Show (m (S m a))) => Show (S m a) where
|   show Nil = "Nil"
|   show (Cons x y) = "Cons " ++ show x ++ " " ++ show y
Ken Shan answered:
| Here's how I've been handling such situations:
|
| data S m a = Nil | Cons a (m (S m a))
|
| -- "ShowF f" means that the functor f "preserves Show"
| class ShowF f where
|   showsPrecF :: (Int -> a -> ShowS) -> (Int -> f a -> ShowS)
Actually, this class definition can be simplified to:
class ShowF f where
  showsPrecF :: Show a => Int -> f a -> ShowS
And the rest of Ken's code accordingly.
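For example, one can then write (a sketch of my own, using Maybe for the
functor m):

data S m a = Nil | Cons a (m (S m a))

class ShowF f where
  showsPrecF :: Show a => Int -> f a -> ShowS

-- Maybe preserves Show, so its ShowF instance is just showsPrec:
instance ShowF Maybe where
  showsPrecF = showsPrec

instance (Show a, ShowF m) => Show (S m a) where
  showsPrec _ Nil        = showString "Nil"
  showsPrec d (Cons x y) = showParen (d > 10) $
    showString "Cons " . showsPrec 11 x . showChar ' ' . showsPrecF 11 y

-- e.g.  show (Cons 1 (Just (Cons 2 Nothing)) :: S Maybe Int)
--       gives "Cons 1 (Just (Cons 2 Nothing))"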
/Koen.
--
Koen Claessen http://www.cs.chalmers.se/~koen
phone:+46-31-772 5424 mailto:koen@cs.chalmers.se
-----------------------------------------------------
Chalmers University of Technology, Gothenburg, Sweden
From davidbak@cablespeed.com Tue May 29 09:31:56 2001
From: davidbak@cablespeed.com (David Bakin)
Date: Tue, 29 May 2001 01:31:56 -0700
Subject: Why is there a space leak here?
References: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout> <15122.48152.852254.22700@waytogo.peace.co.nz> <001a01c0e7c4$a6c887e0$5900a8c0@girlsprout> <20010529074955.4372.qmail@dionysos.informatik.unibw-muenchen.de>
Message-ID: <00a001c0e819$da218470$5900a8c0@girlsprout>
That's a very nice visualization - exactly the kind of thing I was hoping
for. I grabbed your papers and will look over them for more information,
thanks very much for taking the trouble! The animations you sent me - and
the ones on your page - are really nice; it would be nice to have a system
like HOPS available with GHC/GHCi (I understand it is more than the
visualization system, but that's a really useful part of it).
(I also found out that Hood didn't help for this particular purpose - though
now that I see how easy it is to use I'll be using it all the time. But it
is carefully designed to show you ("observe") exactly what has been
evaluated at a given point in the program. Thus you can't use it (as far as
I can tell) to show the data structures that are accumulating but haven't
been processed yet - which is what you need to know to find a space
problem.)
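(For the record, basic Hood usage is roughly the following; I'm going
from memory of the Observe module that comes with Hood, so the exact
module and function names may differ in your copy:

import Observe           -- Hood's observation combinators

-- 'observe' labels a value; 'runO' runs the program and then prints,
-- for each label, exactly the part of the value that was demanded.
main :: IO ()
main = runO (print (length (take 20 (observe "v" v))))
  where
    v :: [Int]
    v = 1 : map (+ 1) v

Since length only demands the spine of the first 20 cells, the
observation shows the list structure but leaves the elements as
unevaluated placeholders, which is exactly my point above: Hood reports
what has been evaluated, not what is piling up unevaluated.)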
-- Dave
From karczma@info.unicaen.fr Tue May 29 10:00:41 2001
From: karczma@info.unicaen.fr (Jerzy Karczmarczuk)
Date: Tue, 29 May 2001 11:00:41 +0200
Subject: Functional programming in Python
References: <9etq1o$lof$1@qrnik.zagroda>
Message-ID: <3B136539.220C18EB@info.unicaen.fr>
Marcin Kowalczyk:
> BTW, before I knew Haskell I experimented with a syntax in which 'x f'
> is the application of 'f' to 'x', and 'x f g' means '(x f) g'. Other
> arguments can also be on the right, but in this case with parentheses,
> e.g. 'x f (y)' is a function f applied to two arguments.
Hmmm. An experimental syntax, you say...
Oh, say, you reinvented FORTH?
(No args in parentheses there, a function taking something at its right
simply *knows* that there is something there).
Jerzy Karczmarczuk
Caen, France
From olaf@cs.york.ac.uk Tue May 29 11:28:36 2001
From: olaf@cs.york.ac.uk (Olaf Chitil)
Date: Tue, 29 May 2001 11:28:36 +0100
Subject: Why is there a space leak here?
References: <006901c0e7b4$792c9bb0$5900a8c0@girlsprout> <15122.48152.852254.22700@waytogo.peace.co.nz> <001a01c0e7c4$a6c887e0$5900a8c0@girlsprout> <20010529074955.4372.qmail@dionysos.informatik.unibw-muenchen.de> <00a001c0e819$da218470$5900a8c0@girlsprout>
Message-ID: <3B1379D4.232246FC@cs.york.ac.uk>
David Bakin wrote:
>
> That's a very nice visualization - exactly the kind of thing I was hoping
> for. I grabbed your papers and will look over them for more information,
> thanks very much for taking the trouble! The animations you sent me - and
> the ones on your page - are really nice; it would be nice to have a system
> like HOPS available with GHC/GHCi (I understand it is more than the
> visualization system, but that's a really useful part of it).
>
> (I also found out that Hood didn't help for this particular purpose - though
> now that I see how easy it is to use I'll be using it all the time. But it
> is carefully designed to show you ("observe") exactly what has been
> evaluated at a given point in the program. Thus you can't use it (as far as
> I can tell) to show the data structures that are accumulating that haven't
> been processed yet - which is what you need to know to find a space
> problem.)
You can use GHood,
http://www.cs.ukc.ac.uk/people/staff/cr3/toolbox/haskell/GHood/ .
It animates the evaluation process at a given point in the program.
It doesn't show unevaluated expressions, but for most purposes you don't
need to see them. What matters most is seeing when each subexpression
gets evaluated.
(An older version of Hood, which is distributed with nhc, also has an
animation facility.)
At some time we also hope to provide such an animation viewer for our
Haskell tracer Hat. The trace already contains all the necessary
information.
Ciao,
Olaf
--
OLAF CHITIL,
Dept. of Computer Science, University of York, York YO10 5DD, UK.
URL: http://www.cs.york.ac.uk/~olaf/
Tel: +44 1904 434756; Fax: +44 1904 432767
From ketil@ii.uib.no Tue May 29 21:44:38 2001
From: ketil@ii.uib.no (Ketil Malde)
Date: 29 May 2001 22:44:38 +0200
Subject: Functional programming in Python
In-Reply-To: Jerzy Karczmarczuk's message of "Tue, 29 May 2001 11:00:41 +0200"
References: <9etq1o$lof$1@qrnik.zagroda>
<3B136539.220C18EB@info.unicaen.fr>
Message-ID:
Jerzy Karczmarczuk writes:
>> BTW, before I knew Haskell I experimented with a syntax in which 'x f'
>> is the application of 'f' to 'x', and 'x f g' means '(x f) g'.
> Hmmm. An experimental syntax, you say...
> Oh, say, you reinvented FORTH?
Wouldn't
    x f g
in a Forth'ish machine mean
    g(f,x)    -- using "standard" math notation, for a change
rather than
    g(f(x))
?
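For reference, Marcin's reading can be written down directly in Haskell
with a little operator (a throwaway definition of my own):

infixl 1 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x

-- x |> f |> g  ==  g (f x),   e.g.  3 |> (+ 1) |> (* 2)  ==  8

so 'x f g' in his syntax comes out as g(f(x)).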
-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
From me@nellardo.com Tue May 29 23:45:44 2001
From: me@nellardo.com (Brook Conner)
Date: Tue, 29 May 2001 18:45:44 -0400
Subject: Functional programming in Python
In-Reply-To: