From wohlstad@cs.ucdavis.edu Thu Jan 4 03:49:16 2001
Date: Wed, 3 Jan 2001 19:49:16 -0800 (PST)
From: Eric Allen Wohlstadter wohlstad@cs.ucdavis.edu
Subject: Problems compiling HDirect 0.17
When I try to compile hdirect on cygwin on my NT I get a lot of complaints
from ghc about bad tokens. Particularly the use of "\" to continue a
line. For some reason it doesn't see it as the line continuation
character.
Also, it compiled ok on a different machine but it complains about a
missing link to CYGWIN1.DLL when I try to do anything.
Any help would be appreciated.
Eric Wohlstadter
From fjh@cs.mu.oz.au Thu Jan 4 03:59:56 2001
Date: Thu, 4 Jan 2001 14:59:56 +1100
From: Fergus Henderson fjh@cs.mu.oz.au
Subject: Problems compiling HDirect 0.17
On 03-Jan-2001, Eric Allen Wohlstadter <wohlstad@cs.ucdavis.edu> wrote:
> When I try to compile hdirect on cygwin on my NT I get a lot of complaints
> from ghc about bad tokens. Particularly the use of "\" to continue a
> line. For some reason it doesn't see it as the line continuation
> character.
You need to make sure that the files are mounted as binary
rather than as text, or vice versa. Try the cygwin `mount' command.
--
Fergus Henderson <fjh@cs.mu.oz.au> | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
From russell@brainlink.com Thu Jan 4 04:11:38 2001
Date: Wed, 03 Jan 2001 23:11:38 -0500
From: Benjamin L. Russell russell@brainlink.com
Subject: Learning Haskell and FP
On Wed, 03 Jan 2001 11:26:53 -0500
Michael Zawrotny <zawrotny@gecko.sb.fsu.edu> wrote:
>
> [snip]
>
> The reason that I found GITH difficult wasn't that the concept
> of programming with functions/functional style was new to me.
> What got me was that the concepts and notations were much more
> "mathematical" than "programmatic". In my explorations of various
> languages, my experience with introductions to scheme/CL has mostly
> been that they tend to show how to do things that are familiar to
> imperative programmers, plus all of the things that you can do with
> functions as first class values. With intros to SML, OCaml and
> Haskell, there is a much greater emphasis on types, type systems,
> and provable program correctness.
>
> [snip]
>
> The thing that I would most like to see would be entitled "A
> Practical Guide to Haskell" or something of that nature.
>
> [snip]
>
> One is tempted to come to the conclusion that Haskell is not
> suited for "normal" programmers writing "normal" programs.
How would you define a "'normal' programmer writing 'normal' programs?" What exactly is a "'normal' program?"
(Perhaps another way of phrasing the issue is as the "declarative" vs. "procedural" distinction, since the issue seems to be that of "what is" (types) vs. "how to" (imperative expression; i.e., procedures).)
While I agree that "A Practical Guide to Haskell" would indeed be a suitable alternative for programmers from the procedural school of expression, I would caution that such an introduction would probably not be suitable for all.
If I may give my own case as an example, I studied both C and Scheme (in addition to auditing a course in Haskell) in college, and favored Scheme over C precisely because of my Scheme course's emphasis on provable program correctness. This is largely a matter of background and taste: my course background was relatively focused on the design and analysis of algorithms, with provable program correctness being a related topic.
Perhaps, ideally, two separate tutorials (or perhaps a single tutorial with two sections based on different viewpoints?) may be needed? The difficulty is that the conceptual distance between the declarative and procedural schools of thought seems too great to be bridged by a single viewpoint. It seems that any introduction favoring either one would risk alienating the other.
Personally, I would really prefer "A Gentle Elementary Introduction to Haskell: Elements of the Haskell School of Expression with Practical Examples," but some would no doubt choose "Haskell in a Nutshell: How to Write Practical Programs in Haskell."
--Ben
--
Benjamin L. Russell
russell@brainlink.com
benjamin.russell.es.94@aya.yale.edu
"Furuike ya! Kawazu tobikomu mizu no oto." --Matsuo Basho
From fruehr@willamette.edu Thu Jan 4 06:55:17 2001
Date: Wed, 3 Jan 2001 22:55:17 -0800 (PST)
From: Fritz K Ruehr fruehr@willamette.edu
Subject: Learning Haskell and FP
Benjamin Russell (russell@brainlink.com) wrote:
> Personally, I would really prefer "A Gentle Elementary
> Introduction to Haskell: Elements of the Haskell School of
> Expression with Practical Examples," but some would no doubt
> choose "Haskell in a Nutshell: How to Write Practical Programs
> in Haskell."
An O'Reilly "nutshell" book is an even better suggestion than
my "Design Patterns in Haskell" of a few days back, at least
from the perspective of marketing and promotion.
But it raises the issue of an appropriate animal mascot for
the cover; I can only come up with the Uakari, an exotic-looking
rainforest monkey, which sounds similar to "Curry".
(look here for a picture:)
<http://www.animalsoftherainforest.com/uakari.htm>
One possibly relevant point: the site notes that the Uakari is
"mainly arboreal (tree-dwelling)". On the other hand, this means
that it is probably threatened by deforestation, whereas this
phenomenon can be of great help in Haskell :) .
-- Fritz Ruehr
fruehr@willamette.edu
From russell@brainlink.com Fri Jan 5 00:04:10 2001
Date: Thu, 04 Jan 2001 19:04:10 -0500
From: Benjamin L. Russell russell@brainlink.com
Subject: Learning Haskell and FP
On Wed, 3 Jan 2001 22:55:17 -0800 (PST)
Fritz K Ruehr <fruehr@willamette.edu> wrote:
>
> [snip]
>
> An O'Reilly "nutshell" book is an even better suggestion than
> my "Design Patterns in Haskell" of a few days back, at least
> from the perspective of marketing and promotion.
>
> But it raises the issue of an appropriate animal mascot for
> the cover; I can only come up with the Uakari, an exotic-looking
> rainforest monkey, which sounds similar to "Curry".
>
> (look here for a picture:)
>
> <http://www.animalsoftherainforest.com/uakari.htm>
Lalit Pant ( lalitp@acm.org ) (alternatively, lalit_pant@yahoo.com ) wrote an article in the May 2000 issue of _Java Report_ entitled "Developing Intelligent Systems With Java and Prolog" that described a Prolog implementation of the A-star search algorithm. Lalit stated that Prolog was useful for algorithm prototyping.
Perhaps Lalit Pant and Simon Peyton Jones could collaborate on an article, perhaps overseen by Paul Hudak, on prototyping search algorithms in Haskell, also for _Java Report_? If that article then attracted a high readership, its success might justify publication of an O'Reilly _Haskell in a Nutshell_ book?
--Ben
P. S. (Hi Doug Fields. I didn't know that you were reading this mailing list. I guess that I should also greet Professor Paul Hudak: Hello, Professor Hudak. Sorry about Collectively Speaking. How's jazz in general?)
Benjamin L. Russell
russell@brainlink.com
benjamin.russell.es.94@aya.yale.edu
"Furuike ya! Kawazu tobikomu mizu no oto." --Matsuo Basho
From patrick@watson.org Fri Jan 5 15:26:19 2001
Date: Fri, 5 Jan 2001 10:26:19 -0500 (EST)
From: Patrick M Doane patrick@watson.org
Subject: Learning Haskell and FP
On Wed, 3 Jan 2001, Simon Peyton-Jones wrote:
> I'm sure that's right. Trouble is, we're the last people qualified
> to write one!
>
> Here's a suggestion: would someone like to write such a guide,
> from the point of view of a beginner, leaving blanks that we can fill in,
> when you come across a task or issue you don't know the answer
> to? That is, you provide the skeleton, and we fill in the blanks.
I read your paper on taming the Awkward Squad several months ago as my
first exposure to Haskell. I think it is an excellent paper and really
convinced me that Haskell was worthwhile to learn and use.
There are aspects to the paper that are like a tutorial, but I think it
would be overwhelming for a programmer not used to reading papers from
academia.
I think a really good beginner's tutorial on I/O could be started from this
paper:
- Start immediately with using the 'do expression' and don't
worry about the history that led to its development.
- Avoid mentioning 'monad' and other mathematical terms until much
later in the game. It is better to see the system in action and then
find out it has a nice solid foundation.
Many people are also annoyed by an author using new vocabulary even
if it is well defined. It's better to get them comfortable with the
system first.
- Take advantage of the 2d syntax rules to avoid the unneeded braces
and semicolons. Syntax with little punctuation seems to go a long way
with many programmers. Pointing out the similarities to Python here
could be appropriate.
- After working through several examples, show that the 'do expression'
is shorthand for some more basic primitive operators, which can be
more appropriate to use in some circumstances (a small desugaring
sketch follows below).
- Conclude with explaining the difference between executing an action
and building a value to execute the action. There is no need to
point out that this is a requirement of being a lazy language.
Instead point out the benefits such a system provides to back up
the claim that Haskell truly is "the world's finest
imperative programming language."
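As a concrete illustration of the "shorthand" point above, here is a
minimal sketch (not taken from the paper; the names are made up) of how a
small 'do expression' corresponds to the primitives (>>) and (>>=):

  -- Written with do-notation:
  greet :: IO ()
  greet = do
    putStr "Name: "
    name <- getLine
    putStrLn ("Hello, " ++ name)

  -- The same action, written with the underlying primitives:
  greet' :: IO ()
  greet' =
    putStr "Name: " >>
    getLine >>= \name ->
    putStrLn ("Hello, " ++ name)

  main :: IO ()
  main = greet >> greet'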
Some points would still be difficult to get through:
- Explaining the type system. There is no avoiding this, and users
will have to learn it.
- Working through the difference between 'unit' and 'void'.
Perhaps this can be avoided in a beginning tutorial. A possibly
confusing example in the paper is that of composing putChar twice
while throwing away the result. People used to C or Java might
immediately think "but it doesn't have a result to throw away!"
(A tiny example of this appears below.)
- Some amount of understanding of Haskell expressions is going to be
needed to understand the examples. An I/O-centric tutorial would want
to explain these as it goes along, as needed.
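To make the putChar point concrete, here is a tiny sketch (not from the
paper; the names are invented) showing that each putChar has type IO (),
so (>>) silently discards the () result of the first action:

  putTwoChars :: Char -> Char -> IO ()
  putTwoChars c1 c2 = putChar c1 >> putChar c2   -- the first () is thrown away

  main :: IO ()
  main = putTwoChars 'a' 'b'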
I would avoid other parts of the paper for a first attempt at some new
tutorial material.
Any thoughts? I could take a first stab at this if people think it would
be a useful direction.
Patrick
From zawrotny@gecko.sb.fsu.edu Fri Jan 5 16:01:13 2001
Date: Fri, 05 Jan 2001 11:01:13 -0500
From: Michael Zawrotny zawrotny@gecko.sb.fsu.edu
Subject: Learning Haskell and FP
"Benjamin L. Russell" <russell@brainlink.com> wrote
> Michael Zawrotny <zawrotny@gecko.sb.fsu.edu> wrote:
[snip]
> >
> > The thing that I would most like to see would be entitled "A
> > Practical Guide to Haskell" or something of that nature.
> >
> > [snip]
> >
> > One is tempted to come to the conclusion that Haskell is not
> > suited for "normal" programmers writing "normal" programs.
>
> How would you define a "'normal' programmer writing 'normal' programs?" What
> exactly is a "'normal' program?"
That was sloppy on my part. My message was getting long, so I used
"normal" as a short cut. I should know better after seeing all of
the flamewars about whether or not FP is suitable for "real" work.
What I meant by "normal programmer" was (surprise) someone like myself.
I.e. someone who doesn't have much, if any background in formal logic,
higher mathematics, proofs of various and sundry things, etc.
By "normal program" I meant things like short utility programs that
I might otherwise write in shell, python, perl, etc. or data extraction
and analysis programs that I might write in python, perl, C or C++
depending on the type of analysis.
> (Perhaps another way of phrasing the issue is as the "declarative" vs.
> "procedural" distinction, since the issue seems to be that of "what is"
> (types) vs. "how to" (imperative expression; i.e., procedures).)
Although there is some of that, for me at least, the types aren't a
big deal. The main thing for me is figuring out how to actually get
something done. Most of the things I've read have had tons of really
neat things that you can do with functional abstractions, but it's all
... abstract. I would love to see something that is more about getting
things done in a how-to sense, including IO. Much of the material I've
seen tends to either relegate IO and related topics into a small section
near the end (almost as if it were somehow tainted by not being able
to be modelled as a mathematical function), or it goes into it talking
about monads and combinators and things that make no sense to me.
> While I agree that "A Practical Guide to Haskell" would indeed be a suitable
> alternative for programmers from the procedural school of expression, I would
> caution that such an introduction would probably not be suitable for all.
This is very true. I think that there is plenty of material that would
be helpful for an SMLer to learn Haskell, but not much for someone who
was only familiar with C or who was only comfortable with FP
to the extent of understanding lambdas, closures and functions as values,
but none of the more esoteric areas of FP. I'm advocating something
along the lines of the "Practical Guide" I mentioned or the "Nutshell"
book below.
> Perhaps, ideally, two separate tutorials (or perhaps a single tutorial with
> two sections based on different viewpoints?) may be needed? The difficulty
> is that the conceptual distance between the declarative and procedural
> schools of thought seems too great to be bridged by a single viewpoint.
> It seems that any introduction favoring either one would risk alienating
> the other.
I agree that no single tutorial would be able to target both audiences.
> Personally, I would really prefer "A Gentle Elementary Introduction to
> Haskell: Elements of the Haskell School of Expression with Practical
> Examples," but some would no doubt choose "Haskell in a Nutshell: How to
> Write Practical Programs in Haskell."
I'm definitely the "Nutshell" type. If it were published, I'd buy
it in a heartbeat. That's why it would be good to have both: everyone
would have one that suited their needs.
I'd like to say a couple things in closing, just so that people don't
get the wrong idea. I like Haskell. Even if I never really write any
programs in it, trying to learn it has given me a different way of
thinking about programming as well as exposing me to some new ideas
and generally broadening my horizons. My only problem is that I just
can't seem to get things done with it. Please note that I am not
saying here, nor did I say previously that it isn't possible to do
things in Haskell. Obviously there are a number of people who can.
I am simply saying that I am having trouble doing it. I would also
like to mention that I really appreciate the helpful and informative
tone on the list, especially on a topic that, even though not intended
that way, could be considered critical of Haskell.
Mike
--
Michael Zawrotny
411 Molecular Biophysics Building
Florida State University | email: zawrotny@sb.fsu.edu
Tallahassee, FL 32306-4380 | phone: (850) 644-0069
From erik@meijcrosoft.com Fri Jan 5 17:10:46 2001
Date: Fri, 5 Jan 2001 09:10:46 -0800
From: Erik Meijer erik@meijcrosoft.com
Subject: Learning Haskell and FP
> > But it raises the issue of an appropriate animal mascot for
> > the cover; I can only come up with the Uakari, an exotic-looking
> > rainforest monkey, which sounds similar to "Curry".
> >
> > (look here for a picture:)
> >
> > <http://www.animalsoftherainforest.com/uakari.htm>
Wow, that looks remarkably like me!
Erik
From jans@numeric-quest.com Fri Jan 5 12:36:23 2001
Date: Fri, 5 Jan 2001 07:36:23 -0500 (EST)
From: Jan Skibinski jans@numeric-quest.com
Subject: Learning Haskell and FP
On Fri, 5 Jan 2001, Michael Zawrotny wrote:
>I would love to see something that is more about getting
> things done in a how-to sense, including IO. Much of the material I've
> seen tends to either relegate IO and related topics into a small section
> near the end (almost as if it were somehow tainted by not being able
> to be modelled as a mathematical function), or it goes into it talking
> about monads and combinators and things that make no sense to me.
Aside from a variety of excellent theoretical papers on "monads and
combinators and things", there are at least three or four down-to-earth
attempts to explain IO in simple terms. Check the Haskell Wiki
page for references.
Personally, I like the "envelope" analogy, which is good enough
for all practical IO manipulations: you open an IO envelope,
manipulate its contents and return the results in another
envelope. Or you just pass the unopened envelope around if you
do not care about reading its contents. But you are not allowed to
pollute the rest of the program by throwing unwrapped
"notes" around; they must always be enclosed in IO "envelopes".
This is an example of a "howto recipe", as you asked for.
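For instance, a minimal sketch of the analogy in code (the file names and
the word-counting function are made up for illustration, not taken from
any particular library):

  -- readFile hands you the file's contents inside an IO "envelope";
  -- within the do block the plain String can be manipulated with
  -- ordinary pure functions, and the result leaves in another
  -- envelope created by writeFile.
  countWords :: FilePath -> FilePath -> IO ()
  countWords inFile outFile = do
    s <- readFile inFile               -- open the envelope
    let n = length (words s)           -- pure work on the contents
    writeFile outFile (show n)         -- seal the result in a new envelope

  main :: IO ()
  main = countWords "input.txt" "wordcount.txt"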
There is plenty of library code and short applications
around where the IO monad is used for simple things, like reading
and writing to files, communicating with external processes
(reading a computer clock or something), exchanging information
with web servers (CGI libraries and similar applications), etc.
Some do data acquisition (like reading arrays from files),
some do nothing more than simply write data out, as in my
GD module.
[This is not self-promotion, this is just the simplest possible
useful IO application that comes to my mind now. Check also
"funpdf" - there are plenty of examples of reading, writing and
interacting with a purely functional world.]
Jan
From Doug_Ransom@pml.com Fri Jan 5 18:22:44 2001
Date: Fri, 5 Jan 2001 10:22:44 -0800
From: Doug Ransom Doug_Ransom@pml.com
Subject: Learning Haskell and FP
> -----Original Message-----
> From: Michael Zawrotny [mailto:zawrotny@gecko.sb.fsu.edu]
> Sent: Friday, January 05, 2001 8:01 AM
> To: haskell-cafe@haskell.org
> Subject: Re: Learning Haskell and FP
>
>
> "Benjamin L. Russell" <russell@brainlink.com> wrote
> > Michael Zawrotny <zawrotny@gecko.sb.fsu.edu> wrote:
> [snip]
> > >
> > > The thing that I would most like to see would be entitled "A
> > > Practical Guide to Haskell" or something of that nature.
> > >
> > > [snip]
> > >
> > > One is tempted to come to the conclusion that Haskell is not
> > > suited for "normal" programmers writing "normal" programs.
> >
> > How would you define a "'normal' programmer writing 'normal' programs?"
> > What exactly is a "'normal' program?"
>
> That was sloppy on my part. My message was getting long, so I used
> "normal" as a short cut. I should know better after seeing all of
> the flamewars about whether or not FP is suitable for "real" work.
>
> What I meant by "normal programmer" was (surprise) someone like myself.
> I.e. someone who doesn't have much, if any background in formal logic,
> higher mathematics, proofs of various and sundry things, etc.
I agree here. Today's software engineers take the tools at their disposal to
make systems as best they can at the lowest price they can. The FP
documentation is not ready -- it is still in the world of academics. The
tools are also not there.
The problem is not that working programmers cannot understand the theory
needed to apply these techniques. We certainly do not have the time to go
through all sorts of academic papers. We do have the time to take home a
textbook and read it over a few weekends. What we need is to be spoon-fed
the important knowledge (i.e. in a single textbook). The various Design
Pattern catalogs do this for OO. FP is a little more complicated, but I
think there is much potential to get more work done in the same time if we
could learn to apply it.
>
> By "normal program" I meant things like short utility programs that
> I might otherwise write in shell, python, perl, etc. or data
> extraction
> and analysis programs that I might write in in python, perl, C or C++
> depending on the type of analysis.
For the sake of diversity, a normal program to me includes everything from
XML processing (which I do a fair bit), database manipulation, delivery of
historical and current (i.e. "real time" or immediate) values to users, and
this one is really key: scalable programs for 3-tiered architectures. The
last one is really interesting. For those familiar with COM+ or MTS (or the
Java equivalent), the middle or business tier typically contains objects
which behave as functions -- when a client calls a middle-tier object, the
state of that middle-tier object is cruft once the call completes. I think
what is called "business logic" in the "real world" of delivering systems
would be an excellent place to use FP.
From wohlstad@cs.ucdavis.edu Sun Jan 7 02:34:54 2001
Date: Sat, 6 Jan 2001 18:34:54 -0800 (PST)
From: Eric Allen Wohlstadter wohlstad@cs.ucdavis.edu
Subject: more HDirect confusion
Things that are popping up when I try to use HDirect:
1. I don't have a "com" directory under "ghc/lib/imports". So, I made one
and copied the files from "hdirect-0.17/lib" into it. I don't know if
that's right.
2. I don't have a "misc" package anywhere. The makefiles complain about
"-syslib misc".
3. I don't have a file called "HScom_imp.a". ld complains about
"-lHScom_imp.a".
I thought I installed everything correctly. I did "make boot", and then
"make", and then "make lib".
Thanks,
Eric
From sebc@posse42.net Mon Jan 8 11:07:17 2001
Date: Mon, 8 Jan 2001 12:07:17 +0100
From: Sebastien Carlier sebc@posse42.net
Subject: Extending the do-notation
> > I'm constantly amazed by the number of tricks one has
> > to know before he can write concise code using the
> > do-notation (among other things, I used to write
> > "x <- return $ m" instead of "let x = m").
> [snip]
> Why do you WANT to write concise code using the do-notation?
> Has someone revived the Obfuscated Haskell Contest, or
> do you find touch-typing difficult?
Which of the following is easier to read (and please forgive
the short variable names)?
> x <- return $ m
or
> let x = m
> x <- m
> let (a, b) = unzip x
> ... -- (and this code uses an extra variable)
or
> (a, b) <- unzip `liftM` m
Concise does not mean obfuscated. Unnecessarily inflating your code
will not make it more readable. Or am I wrong ?
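(For what it's worth, the two forms agree because of the monad
left-identity law, return m >>= k == k m, and liftM just applies a pure
function inside an action. A small sketch with an invented example type,
assuming liftM from Control.Monad:)

  import Control.Monad (liftM)

  -- "x <- return m" behaves exactly like "let x = m", by the law
  --   return m >>= k  ==  k m
  -- liftM lets the pair be bound directly, without an extra variable:
  example :: Monad m => m [(Int, Char)] -> m ([Int], String)
  example m = do
    (a, b) <- unzip `liftM` m
    return (a, b)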
From simonpj@microsoft.com Mon Jan 8 13:54:09 2001
Date: Mon, 8 Jan 2001 05:54:09 -0800
From: Simon Peyton-Jones simonpj@microsoft.com
Subject: Extending the do-notation
| Another question concerning the do-notation: I noticed
| that most parts of ghc do not use it. Is it because
| the code was written before the notation was available,
| because the do-notation is too weak to express these
| parts, or for another fundamental reason ?
The former: mostly written before do-notation existed.
We're gradually migrating!
Simon
From russell@brainlink.com Mon Jan 8 18:30:44 2001
Date: Mon, 08 Jan 2001 13:30:44 -0500
From: Benjamin L. Russell russell@brainlink.com
Subject: Learning Haskell and FP
On Fri, 5 Jan 2001 10:26:19 -0500 (EST)
Patrick M Doane <patrick@watson.org> wrote:
>
> [snip]
>
> I think a really good beginner's tutorial on I/O could be
> started from this paper:
>
> - Start immediately with using the 'do expression' and don't
> worry about the history that led to its development.
Actually, the history, especially from a comparative programming languages standpoint, can sometimes be useful for motivation.
For example, many Java textbooks motivated study of the language by explaining the need for a C-style language without explicit memory allocation or explicit pointer casting. Similarly, an on-line document for C# motivated that language by explaining how it grew out of a need for a language similar to C and C++ (the document somehow left out the Java comparison :-( ), but one that allowed programmers to develop more efficiently.
Even for a "Haskell in a Nutshell"-style textbook, a couple of paragraphs comparing Haskell to other languages from a historical viewpoint and describing the advantages and disadvantages of Haskell in particular could prove quite useful.
> [snip]
>
> Many people are also annoyed by an author using new vocabulary even
> if it is well defined. It's better to get them comfortable with the
> system first.
That depends on which new vocabulary is being mentioned, though. That may be true for unnecessary new vocabulary, such as "monads" for the first chapter. However, e.g. in the following example (borrowed from Chapter 3 of _A Gentle Introduction to Haskell, Version 98,_ by Paul Hudak):
add :: Integer -> Integer -> Integer
add x y = x + y
it is hard not to introduce such vocabulary as "has type," "arrow" (or "mapping"), and maybe even "currying."
> [snip]
>
> - Conclude with explaining the difference between executing an action
> and building a value to execute the action. There is no need to
> point out that this is a requirement of being a lazy language.
> Instead point out the benefits such a system provides to back up
> the claim that Haskell truly is "the world's finest
> imperative programming language."
Forgive me if I am ignorant, but who claimed that Haskell was an "imperative" language?
Also, in order to take full advantage of Haskell, it would seem necessary to get used to functional programming style (the Haskell school of expression, in particular). It seems that using Haskell as an "imperative" language is a bit like thinking in C when programming in C++; only worse, since the imperative habits are being brought into the functional, rather than the OO, realm.
--Ben
--
Benjamin L. Russell
russell@brainlink.com
benjamin.russell.es.94@aya.yale.edu
"Furuike ya! Kawazu tobikomu mizu no oto." --Matsuo Basho
From erik@meijcrosoft.com Mon Jan 8 20:06:35 2001
Date: Mon, 8 Jan 2001 12:06:35 -0800
From: Erik Meijer erik@meijcrosoft.com
Subject: Learning Haskell and FP
> Forgive me if I am ignorant, but who claimed that Haskell was an
> "imperative" language?
>
> Also, in order to take full advantage of Haskell, it would seem necessary
> to get used to functional programming style (the Haskell school of
> expression, in particular). It seems that using Haskell as an "imperative"
> language is a bit like thinking in C when programming in C++; only worse,
> since the imperative habits are being brought into the functional, rather
> than the OO, realm.
Nope, I also think that Haskell is the world's finest *imperative* language
(and the world's best functional language as well). The beauty of monads is
that you can encapsulate imperative actions as first-class values, i.e. they
have the same status as functions, lists, ... Not many other imperative
languages have statements as first class citizens.
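A small sketch of what "first class" buys you (the names here are invented
for illustration; any Haskell system should accept it):

  -- IO actions are ordinary values: they can be stored in a list,
  -- passed to functions, and reordered before anything is executed.
  actions :: [IO ()]
  actions = [putStrLn "one", putStrLn "two", putStrLn "three"]

  runAll :: [IO ()] -> IO ()
  runAll = foldr (>>) (return ())

  main :: IO ()
  main = runAll (reverse actions)   -- reordering the values reorders the effects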
Erik
From theo@engr.mun.ca Mon Jan 8 21:18:14 2001
Date: Mon, 08 Jan 2001 17:48:14 -0330
From: Theodore Norvell theo@engr.mun.ca
Subject: Learning Haskell and FP
Erik Meijer wrote:
> Nope, I also think that Haskell is the world's finest *imperative* language
> (and the world's best functional language as well). The beauty of monads is
> that you can encapsulate imperative actions as first-class values, i.e. they
> have the same status as functions, lists, ... Not many other imperative
> languages have statements as first class citizens.
It may be the only imperative language that doesn't have mutable variables
as a standard part of the language. :-)
I do agree that Haskell has a lot of nice imperative features, but it
is also missing a few that are fundamental to imperative programming.
Personally, I'd love to see a language that is imperative from the
ground up, that has some of the design features of Haskell (especially
the type system), but I don't think that Haskell is that language (yet?).
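(To be fair, mutable variables are available as a library rather than as a
core construct: GHC's IOExts -- in later library layouts, Data.IORef --
provides IORefs that live in the IO monad. A minimal sketch, assuming the
Data.IORef interface:)

  import Data.IORef

  counterDemo :: IO Int
  counterDemo = do
    r <- newIORef (0 :: Int)   -- allocate a mutable cell
    modifyIORef r (+1)
    modifyIORef r (+1)
    readIORef r                -- yields 2

  main :: IO ()
  main = counterDemo >>= print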
A question for the list: Is there a book that gives a good introduction
to Hindley-Milner typing theory and practice, and that delves into its
various extensions (e.g. imperative programs, type classes, record types)?
I have Mitchell's book out from the library, but it seems a bit limited
with respect to extensions (I think it deals with subtypes, but not
type classes and mutable variables, for example).
Cheers,
Theodore Norvell
----------------------------
Dr. Theodore Norvell theo@engr.mun.ca
Electrical and Computer Engineering http://www.engr.mun.ca/~theo
Engineering and Applied Science Phone: (709) 737-8962
Memorial University of Newfoundland Fax: (709) 737-4042
St. John's, NF, Canada, A1B 3X5
From Tom.Pledger@peace.com Mon Jan 8 22:04:38 2001
Date: Tue, 9 Jan 2001 11:04:38 +1300
From: Tom Pledger Tom.Pledger@peace.com
Subject: Haskell Language Design Questions
Doug Ransom writes:
[...]
> 2. It seems to me that the Maybe monad is a poor substitute for
> exception handling because the functions that raise errors may not
> necessarily support it.
It sometimes helps to write such functions for monads in general,
rather than for Maybe in particular. Here's an example adapted from
the standard List module:
    findIndex :: (a -> Bool) -> [a] -> Maybe Int
    findIndex p xs = case findIndices p xs of
                       (i:_) -> Just i
                       []    -> Nothing
It generalises to this:
    findIndex :: Monad m => (a -> Bool) -> [a] -> m Int
    findIndex p xs = case findIndices p xs of
                       (i:_) -> return i
                       []    -> fail "findIndex: no match"
The price of the generalisation is that you may need to add some type
signatures to resolve any new overloading at the top level of the
module.
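For concreteness, a small usage sketch (not from the original message).
Note that with present-day base libraries the generalised version needs a
MonadFail constraint, since fail has moved out of Monad; under Haskell 98
the plain Monad constraint above is fine:

  import Data.List (findIndices)

  findIndexM :: MonadFail m => (a -> Bool) -> [a] -> m Int
  findIndexM p xs = case findIndices p xs of
                      (i:_) -> return i
                      []    -> fail "findIndex: no match"

  demo :: IO ()
  demo = do
    print (findIndexM even [1,3,5,6] :: Maybe Int)   -- Just 3
    print (findIndexM even [1,3,5]   :: Maybe Int)   -- Nothing
    print (findIndexM even [1,3,5,6] :: [Int])       -- [3]
    n <- findIndexM even [1,3,5,6]                   -- in IO: returns 3
    print n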
Regards,
Tom
From nordland@cse.ogi.edu Mon Jan 8 23:42:16 2001
Date: Mon, 08 Jan 2001 15:42:16 -0800
From: Johan Nordlander nordland@cse.ogi.edu
Subject: Are anonymous type classes the right model at all? (replying to Re: Are fundeps the right model at all?)
Tom Pledger wrote:
>
> Marcin 'Qrczak' Kowalczyk writes:
> [...]
> > My new record scheme proposal does not provide such lightweight
> > extensibility, but fields can be added and deleted in a controlled
> > way if the right types and instances are made.
>
> Johan Nordlander must be on holiday or something, so I'll deputise for
> him. :-)
No holiday in sight, I'm afraid :-) I just managed to resist the
temptation of throwing in another ad for O'Haskell. But since my name
was brought up...
> O'Haskell also has add-a-field subtyping. Here's the coloured point
> example (from http://www.cs.chalmers.se/~nordland/ohaskell/survey.html):
>
> struct Point =
> x,y :: Float
>
> struct CPoint < Point =
> color :: Color
>
> Regards,
> Tom
Notice though that O'Haskell lacks the ability to delete fields, which I
think is what Marcin also proposes. I've avoided such a feature in
O'Haskell since it would make the principal type of an expression
sensitive to future type declarations. For example, assuming we have
f p = p.x ,
its principal type would be Point -> Float if only the type definitions
above are in scope, but OneDimPoint -> Float in another scope where some
type OneDimPoint is defined to be Point with field y deleted.
-- Johan
From schulzs@uni-freiburg.de Wed Jan 10 20:53:51 2001
Date: Wed, 10 Jan 2001 20:53:51 +0000
From: Sebastian Schulz schulzs@uni-freiburg.de
Subject: ANNOUNCE: Draft TOC of Haskell in a Nutshell
"Benjamin L. Russell" wrote:
>
> On Tue, 9 Jan 2001 09:00:27 +0100 (MET)
> Johannes Waldmann <joe@isun.informatik.uni-leipzig.de> wrote:
> >
> > This could be driven to the extreme: not only hide the
> > word "monad",
> > but also "functional". The title would be "Imperative
> > programming in Haskell"
> > (as S. Peyton Jones says in Tackling the Awkward Squad:
> > "Haskell is the world's finest imperative programming
> > language").
>
> Couldn't this choice potentially backfire, though? For example, many people choose Java over C because they prefer OO to straight imperative programming, which they see as The Old Way.
>
> If I went to a bookstore and saw one book entitled, "Imperative Programming in Haskell," and another entitled, "OO Programming in Java," I wouldn't buy the Haskell book, especially if I had already had a bad experience with imperative programming in C.
>
> How about, "The Post-OO Age: Haskell: Back to the Future in Imperative Programming"?
I didn't follow this discussion very closely, but:
Hey! What's so evil about the word "functional"??!
Haskell was the first language I learned (to love ;-), and for me it's
more difficult to think imperatively (e.g. when I have to do some homework
in Java).
In that bookstore, I would buy a book "Functional Programming in Java"
:) .
But seriously, I don't think that it is good to hide the fact that Haskell
is a functional language. Nobody will realize how comfortable and
elegant the functional way is while he is still thinking: "Wow, how
complicated it is to program imperatively with this functional syntax".
regards
Sebastian
From mpj@cse.ogi.edu Wed Jan 10 20:08:17 2001
Date: Wed, 10 Jan 2001 12:08:17 -0800
From: Mark P Jones mpj@cse.ogi.edu
Subject: Hugs
| I've currently installed Hugs on my PC, could you tell me how
| I can configure Hugs to use an editor. The editor I have got
| installed on my computer is winedt.
This question is answered in the Hugs manual, Section 4 (or
pages 11-13 in the pdf version).
Please note also that questions that are about using Hugs should
be sent to the hugs-users mailing list, and not to the Haskell
list.
All the best,
Mark
From kort@wins.uva.nl Mon Jan 15 19:24:08 2001
Date: Mon, 15 Jan 2001 20:24:08 +0100
From: Jan Kort kort@wins.uva.nl
Subject: gui building in haskell
Matthew Liberty wrote:
>
> Greetings,
>
> I've been looking at http://www.haskell.org/libraries/#guis and trying
> to figure out which package is "best" for building a gui. Can anyone
> give a comparison of the strengths/weaknesses of these packages (or any
> others)?
Hi Matt,
When Manuel's Haskell GTK+ binding (gtkhs) is finished it will
be really cool.
On top of gtkhs there are/will be many other libraries and tools:
- iHaskell: a high level GUI library that avoids the eventloop
mess.
- GtkGLArea: 3D graphics in your GTK+ application using Sven's HOpenGL.
- GUI painter: All you need is a backend for Glade. I'm currently
working on this. Or at least I was half a year ago, other (even
more interesting) things claimed most of my spare time.
Of course this will all take a while to come to a beta release.
FranTk is currently the only high level interface that
is in beta. There is no release for ghc4.08, but it
takes only a day or so to get it working.
In short: use FranTk now or wait a year and use gtkhs.
The reason there are so many GUI libraries on the web page is
that nothing ever gets removed. Most of the libraries are
no longer supported.
I only realized after making the table that it was a bit
redundant, but since I spent 10 mins getting the stuff
lined up properly (please don't tell me you have a
proportional font) you'll have to suffer through it:
+------------+--------+-------+-----+------+------+
| GUI lib | Status | Level | W98 | Unix | OS X |
+------------+--------+-------+-----+------+------+
| TclHaskell | beta | low | yes | yes | |
| Fudgets | alpha*)| high | no | yes | |
| gtkhs | alpha | low | | yes | |
| iHaskell | | high | | | |
| FranTk | beta | high | yes | yes | |
+------------+--------+-------+-----+------+------+
| Haggis | dead | high | no | yes | |
| Haskell-Tk | dead | | | | |
| Pidgets | dead | | | | |
| Budgets | dead | | | | |
| EmbWin | dead | | | | |
| Gadgets | dead | | | | |
+------------+--------+-------+-----+------+------+
*) I thought Fudgets was dead, but apparently it has been revived.
You might want to check that out too, although it has the words
"hackers release" all over it (literaly).
Status: alpha/beta
I guess "dead" sounds a bit rude. It means that I couldn't
find a distribution or it would take too much effort to get
it working.
Level: low: Raw calls to C library.
high: More functional style.
W98: Whether it works on Windows 95/98/ME with ghc4.08.
Unix: Same for Unix (Solaris in my case).
OS X: Same for Macintosh OS X.
Hope this helps,
Jan
From qrczak@knm.org.pl Mon Jan 15 19:49:05 2001
Date: 15 Jan 2001 19:49:05 GMT
From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl
Subject: Are fundeps the right model at all?
Thanks for the reply!
Mon, 15 Jan 2001 02:01:18 -0800, Mark P Jones <mpj@cse.ogi.edu> writes:
> 1) "A weaker notion of ambiguity" (title of Section 5.8.3 in my dissertation,
> which is where I think the following idea originated): We can modify the
> definition of ambiguity for certain kinds of class constraint by considering
> (for example) only certain subsets of parameters. In this setting, a type
> P => t is ambiguous if and only if there is a variable in AV(P) that is not
> in t. The AV function returns the set of potentially ambiguous variables in
> the predicates P. It is defined so that AV(Eq t) = TV(t), but also can
> accommodate things like AV(Has_field r a) = TV(r). A semantic justification
> for the definition of AV(...) is needed in cases where only some parameters
> are used; this is straightforward in the case of Has_field-like classes.
> Note that, in this setting, the only thing that changes is the definition
> of an ambiguous type.
This is exactly what I have in mind.
Perhaps extended a bit, to allow multiple values of AV(P). During
constraint solving unambiguous choices for the value of AV(P) must
unify to a common answer, but ambiguous choices are not considered
(or something like that).
I don't see a concrete practical example for this extension yet, so
it may be an unneeded complication, but I can imagine two parallel
sets of types with a class expressing a bijection between them:
a type from either side is enough to determine the other. This is
what fundeps can do, basic (1) cannot, but extended (1) can - with a
different treatment of polymorphic types than fundeps, i.e. allowing
some functions which are not necessarily bijections.
> - When a user writes an instance declaration:
>
> instance P => C t1 ... tn where ...
>
> you treat it, in the notation of (2) above, as if they'd written:
>
> instance P => C t1 ... tn
> improves C t11 ... t1n, ..., C tm1 ... tmn where ...
>
> Here, m is the number of keys, and:  tij = ti  if parameter i is in key j,
>                                      tij = ai  otherwise,
> where a1, ..., an are distinct new variables.
Sorry, I don't understand (2) well enough to see if this is the case.
Perhaps it is.
> - Keys will not give you the full functionality of functional dependencies,
> and that missing functionality is important in some cases.
And vice versa. Plain fundeps can't allow both
Has_parens r a => r
as an unambiguous type and
Has_parens TokenParser (Parser a -> Parser a)
as an instance.
> PS. If we're going to continue this discussion any further, let's
> take it over into the haskell-cafe ...
OK.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From ashley@semantic.org Mon Jan 15 21:02:53 2001
Date: Mon, 15 Jan 2001 13:02:53 -0800
From: Ashley Yakeley ashley@semantic.org
Subject: gui building in haskell
At 2001-01-15 11:24, Jan Kort wrote:
>When Manuel's Haskell GTK+ binding (gtkhs) is finished it will
>be really cool.
>
>On top of gtkhs there are/will be many other libraries and tools:
>- iHaskell: a high level GUI library that avoids the eventloop
> mess.
>- GtkGLArea: 3D graphics in your GTK+ application using Sven's HOpenGL.
>- GUI painter: All you need is a backend for Glade. I'm currently
> working on this. Or at least I was half a year ago, other (even
> more interesting) things claimed most of my spare time.
Would some kind Haskell-to-Java bridge be a cost-effective way of
providing a multi-platform GUI library, as well as network, SQL, RMI
etc., etc.?
It doesn't necessarily imply compiling to the JVM. Java could simply see
the compiled Haskell code through the JNI.
--
Ashley Yakeley, Seattle WA
From kort@wins.uva.nl Tue Jan 16 10:48:07 2001
Date: Tue, 16 Jan 2001 11:48:07 +0100
From: Jan Kort kort@wins.uva.nl
Subject: gui building in haskell
Ashley Yakeley wrote:
> Would some kind Haskell-to-Java bridge be a cost-effective way of
> providing a multi-platform GUI library, as well as network, SQL, RMI
> etc., etc.?
>
> It doesn't necessarily imply compiling to the JVM. Java could simply see
> the compiled Haskell code through the JNI.
That sounds unlikely to me; how do you override methods through JNI?
The only way I can see this working is the way Mondrian does it:
make a more object oriented Haskell and compile to Java. I don't
think Mondrian is anywhere near an alpha release though.
Jan
From elke.kasimir@catmint.de Tue Jan 16 13:48:41 2001
Date: Tue, 16 Jan 2001 14:48:41 +0100 (CET)
From: Elke Kasimir elke.kasimir@catmint.de
Subject: gui building in haskell
On 16-Jan-2001 Jan Kort wrote:
(interesting stuff snipped...)
> Ashley Yakeley wrote:
>> Would some kind Haskell-to-Java bridge be a cost-effective way of
>> providing a multi-platform GUI library, as well as network, SQL, RMI
>> etc., etc.?
>>
>> It doesn't necessarily imply compiling to the JVM. Java could simply see
>> the compiled Haskell code through the JNI.
>
> That sounds unlikely to me; how do you override methods through JNI?
> The only way I can see this working is the way Mondrian does it:
> make a more object oriented Haskell and compile to Java. I don't
> think Mondrian is anywhere near an alpha release though.
Aside: I often think that the Java GUI, SQL, etc. stuff is also nowhere
near an alpha release ...
Best,
Elke
---
"If you have nothing to say, don't do it here..."
Elke Kasimir
Skalitzer Str. 79
10997 Berlin (Germany)
fon: +49 (030) 612 852 16
mail: <elke.kasimir@catmint.de>
see: <http://www.catmint.de/elke>
for pgp public key see:
<http://www.catmint.de/elke/pgp_signature.html>
From C.Reinke@ukc.ac.uk Tue Jan 16 16:11:53 2001
Date: Tue, 16 Jan 2001 16:11:53 +0000
From: C.Reinke C.Reinke@ukc.ac.uk
Subject: Too Strict?
Dominic,
> What I can't see at the moment is how to keep what I was doing modular. I had
> a module Anonymize, the implementation of which I wanted to change without
> the user of it having to change their code. The initial implementation was a
> state monad which generated a new string every time it needed one but if it
> was a string it had already anonymized then it looked it up in the state. I
> initially used a list but with 100,000+ strings it took a long time. The next
> implementation used FiniteMap which improved things considerably. I only had
> to make three changes in Anonymize and none in Main. Using MD5 is quicker
> still but isn't so good from the program maintenance point of view.
my first stab at the modularity issue was the version _2 in my last message.
Looking back at the Anonymizable class and instances in your full program,
  type Anon a = IO a

  class Anonymizable a where
    anonymize :: a -> Anon a

  -- MyString avoids overlapping instances of Strings
  -- with the [Char]
  data MyString = MyString String
    deriving Show

  instance Anonymizable MyString where
    anonymize (MyString x)
      = do s <- digest x
           return ((MyString . showHex') s)

  instance Anonymizable a => Anonymizable [a] where
    anonymize xs = mapM anonymize xs
the problem is in the Anonymizable instance for [a]: the mapM in anonymize
constructs an IO script, consisting of some IO operation for each list element,
all chained together into a monolithic whole.
As IO a is an abstract type, this is a bit too restrictive to be modular: if I
ever want any of the anonymized Strings, I can only get a script that
anonymizes them all - before executing that script, I don't have any anonymized
Strings, and after executing the script, all of them have been processed.
This forestalls any attempt to interleave the anonymization with some further
per-element processing. Instead, I would prefer to have a list of IO actions,
not yet chained together (after all, in Haskell, they are just data items), but
that doesn't fit the current return type of anonymize. One approach would be
to change the type of Anon a to [IO a], or to ignore the [a] instance and use
the MyString instance only, but the longer I look at the code, the less I'm
convinced that the overloading is needed at all.
Unless there are other instances of Anonymizable, why not simply have a
function anonymize :: String -> Anon String ? That would still allow you to
hide the implementation decisions you mentioned (even in a separate module),
provided that any extra state you need can be kept in the IO monad.
One would have to write mapM anonymize explicitly where you had simply
anonymize, but it would then become possible to do something else with the list
of IO actions before executing them (in this case, to interleave the printing
with the anonymization).
First, here is the interesting fragment with the un-overloaded anonymize:
  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       a <- mapM anonymize (lines s)
       hPutStr h (unlines a)
It is now possible to import anonymize from elsewhere and do the interleaving
in the code that uses anonymize:
  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       let action line = do
             { a <- anonymize line
             ; hPutStr h a
             }
       mapM_ action (lines s)
Would that work for your problem? Alternatively, if some of your implementation
options require initialization or cleanup, your Anonymize module could offer a
function to process all lines, with a hook for per-line processing:
  processLinesWith perLineAction ls =
    do { initialize
       ; as <- mapM action ls
       ; cleanup
       ; return as
       }
    where
      action l = do { a <- anonymize l ; perLineAction a }
Then the code in the client module could simply be:
  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       processLinesWith (hPutStr h) (lines s)
       return ()
Closing the loop, one could now redefine the original, overloaded anonymize to
take a perLineAction, with the obvious instances for MyString and [a], but I
really don't see why every function should have to be called anonymize?-)
Claus
PS The simplified code of the new variant, for observation:
  module Main(main) where

  import Observe
  import IO(openFile,
            hPutStr,
            IOMode(ReadMode,WriteMode,AppendMode))

  filename = "ldif1.txt"
  fileout  = "ldif.out"

  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       let { anonymize s = return (':':s)
           ; action l = do
               { a <- anonymize l
               ; hPutStr h a
               }
           }
       mapM_ (observe "action" action) (lines s)

  main = runO readAndWriteAttrVals
From ashley@semantic.org Tue Jan 16 21:13:30 2001
Date: Tue, 16 Jan 2001 13:13:30 -0800
From: Ashley Yakeley ashley@semantic.org
Subject: gui building in haskell
At 2001-01-16 02:48, Jan Kort wrote:
>Ashley Yakeley wrote:
>> Would some kind Haskell-to-Java bridge be a cost-effective way of
>> providing a multi-platform GUI library, as well as network, SQL, RMI
>> etc., etc.?
>>
>> It doesn't necessarily imply compiling to the JVM. Java could simply see
>> the compiled Haskell code through the JNI.
>
>That sounds unlikely to me; how do you override methods through JNI?
You create a stub Java class with methods declared 'native'. Anything you
can do in Java, you can do in the JNI.
--
Ashley Yakeley, Seattle WA
From Tom.Pledger@peace.com Tue Jan 16 22:04:40 2001
Date: Wed, 17 Jan 2001 11:04:40 +1300
From: Tom Pledger Tom.Pledger@peace.com
Subject: O'Haskell OOP Polymorphic Functions
Ashley Yakeley writes:
> At 2001-01-16 13:18, Magnus Carlsson wrote:
>
> >f1 = Just 3
> >f2 = f3 = f4 = Nothing
>
> So I've declared b = d, but 'theValue b' and 'theValue d' are different
> because theValue is looking at the static type of its argument?
>
> What's to stop 'instance TheValue Base' applying in 'theValue d'?
The subtyping (struct Derived < Base ...) makes the two instances
overlap, with 'instance TheValue Derived' being strictly more specific
than 'instance TheValue Base'. If the system preferred the less
specific one, the more specific one would never be used.
This is quite similar to the way overlapping instances are handled
when they occur via substitutions for type variables (e.g. 'instance C
[Char]' is strictly more specific than 'instance C [a]') in
implementations which support that language extension.
Regards,
Tom
From ashley@semantic.org Tue Jan 16 22:20:50 2001
Date: Tue, 16 Jan 2001 14:20:50 -0800
From: Ashley Yakeley ashley@semantic.org
Subject: O'Haskell OOP Polymorphic Functions
At 2001-01-16 14:04, Tom Pledger wrote:
>The subtyping (struct Derived < Base ...) makes the two instances
>overlap, with 'instance TheValue Derived' being strictly more specific
>than 'instance TheValue Base'. If the system preferred the less
>specific one, the more specific one would never be used.
>
>This is quite similar to the way overlapping instances are handled
>when they occur via substitutions for type variables (e.g. 'instance C
>[Char]' is strictly more specific than 'instance C [a]') in
>implementations which support than language extension.
Subtyping-overlapping is quite different from type-substitution
overlapping.
Consider:
struct Base

struct D1 < Base =
   a1 :: Int

struct D2 < Base =
   a2 :: Int

class TheValue a where theValue :: a -> Int
instance TheValue Base where theValue _ = 0
instance TheValue D1   where theValue _ = 1
instance TheValue D2   where theValue _ = 2

struct M < D1,D2

m = struct
   a1 = 0
   a2 = 0

f = theValue m
What's the value of f?
--
Ashley Yakeley, Seattle WA
From Tom.Pledger@peace.com Tue Jan 16 22:36:06 2001
Date: Wed, 17 Jan 2001 11:36:06 +1300
From: Tom Pledger Tom.Pledger@peace.com
Subject: O'Haskell OOP Polymorphic Functions
Ashley Yakeley writes:
[...]
> Subtyping-overlapping is quite different from type-substitution
> overlapping.
Different, but with some similarities.
> Consider:
>
> struct Base
>
> struct D1 < Base =
> a1 :: Int
>
> struct D2 < Base =
> a2 :: Int
>
> class TheValue a where theValue :: a -> Int
> instance TheValue Base where theValue _ = 0
> instance TheValue D1 where theValue _ = 1
> instance TheValue D2 where theValue _ = 2
>
> struct M < D1,D2
>
> m = struct
> a1 = 0
> a2 = 0
>
> f = theValue m
>
> What's the value of f?
Undefined, because neither of the overlapping instances is strictly
more specific than the other. I hope that would cause a static error,
anywhere that 'instance TheValue D1', 'instance TheValue D2', and
'struct M' are all in scope together.
Here's a similar example using type-substitution overlapping:
instance TheValue Char where ...
instance Monad m => TheValue (m Char) where ...
instance TheValue a => TheValue (Maybe a) where ...
trouble = theValue (Just 'b')
Regards,
Tom
From ashley@semantic.org Tue Jan 16 22:52:35 2001
Date: Tue, 16 Jan 2001 14:52:35 -0800
From: Ashley Yakeley ashley@semantic.org
Subject: O'Haskell OOP Polymorphic Functions
At 2001-01-16 14:36, Tom Pledger wrote:
>Here's a similar example using type-substitution overlapping:
>
> instance TheValue Char where ...
> instance Monad m => TheValue (m Char) where ...
> instance TheValue a => TheValue (Maybe a) where ...
>
> trouble = theValue (Just 'b')
Apparently this is not good Haskell syntax. I tried compiling this in
Hugs:
  class TheValue a where theValue :: a -> Int
  instance TheValue Char where theValue _ = 0
  instance (Monad m) => TheValue (m Char) where theValue _ = 1    -- error here
  instance (TheValue a) => TheValue (Maybe a) where theValue _ = 2
  trouble = theValue (Just 'b')
I got a syntax error:
(line 3): syntax error in instance head (variable expected)
--
Ashley Yakeley, Seattle WA
From chak@cse.unsw.edu.au Tue Jan 16 23:33:30 2001
Date: Tue, 16 Jan 2001 23:33:30 GMT
From: Manuel M. T. Chakravarty chak@cse.unsw.edu.au
Subject: gui building in haskell
Jan Kort <kort@wins.uva.nl> wrote,
> +------------+--------+-------+-----+------+------+
> | GUI lib | Status | Level | W98 | Unix | OS X |
> +------------+--------+-------+-----+------+------+
[..]
> | gtkhs | alpha | low | | yes | |
[..]
Given the current cross-platform efforts of GTK+, I think,
this is overly pessimistic. Currently, GTK+ runs at least
also on BeOS, there is also a Win98 version, and in addition
to the X interface on Unix, there is now also direct support
for the Linux framebuffer device.
Gtk+HS has only been tested on Unix, but it doesn't contain
anything Unix specific, I think - except that it makes use
of the usual GNU build tools like autoconf and gmake.
Cheers,
Manuel
From fruehr@willamette.edu Wed Jan 17 00:38:13 2001
Date: Tue, 16 Jan 2001 16:38:13 -0800
From: Fritz Ruehr fruehr@willamette.edu
Subject: Learning Haskell and FP
Erik Meijer said:
> Not many other imperative languages have statements as first class citizens.
I don't have the details here (e.g., an Algol 68 report), but Michael Scott
reports in his "Programming Language Pragmatics" text (p. 279) that:
"Algol 68 [allows], in essence, indexing into an array of statements,
but the syntax is rather cumbersome."
This is in reference to historical variations on switch-like statements (and
consistent with a running theme, typical in PL texts, about the extremes of
orthogonality found in Algol 68).
-- Fritz Ruehr
fruehr@willamette.edu
From jf15@hermes.cam.ac.uk Wed Jan 17 12:23:46 2001
Date: Wed, 17 Jan 2001 12:23:46 +0000 (GMT)
From: Jon Fairbairn jf15@hermes.cam.ac.uk
Subject: Learning Haskell and FP
On Tue, 16 Jan 2001, Fritz Ruehr wrote:
> Erik Meijer said:
>
> > Not many other imperative languages have statements as first class citizens.
>
> I don't have the details here (e.g., an Algol 68 report), but Michael Scott
> reports in his "Programming Language Pragmatics" text (p. 279) that:
>
> "Algol 68 [allows], in essence, indexing into an array of statements,
> but the syntax is rather cumbersome."
Well, there are two ways it allows this.
1) The case statement is little more than array indexing
case <int>
in <stmt1>,
<stmt2>,
...
out <other statement>
esac
2) You can create an array of procedures returning void
results, for which if I remember correctly you have to write
VOID: <stmt>
to turn the <stmt> into a proc void. You can certainly
index an array of these and the relevant proc will be called
as soon as the indexing happens (you don't need to write
() or anything).
So (VOID: print ("a"), VOID: print ("b"))[2] would print
"b". I can't remember if you need to specify the type of the
array, though.
The statements aren't first class, though, because their
scope is restricted by the scope of variables that they
reference. So
begin [10] proc void x; # declare an array of procs #
begin int n := 42;
x[1] := void: (print (n))
end;
x[1]
end
is invalid because at x[1] it would call the procedure,
which would refer to n, which is out of scope (and quite
possibly because of sundry syntax errors!). So Algol 68
isn't a counterexample to Erik's claim.
> This is in reference to historical variations on switch-like statements (and
> consistent with a running theme, typical in PL texts, about the extremes of
> orthogonality found in Algol 68).
Although if they'd really wanted to be extreme they could
have left out integer case clauses, because they are the
same as indexing a [] proc void!
Jón
--
Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk
31 Chalmers Road jf@cl.cam.ac.uk
Cambridge CB1 3SZ +44 1223 570179 (pm only, please)
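[For illustration: Erik's claim can be made concrete in Haskell itself.
IO actions ("statements") are ordinary first-class values, so indexing
into a collection of them needs no special machinery. The names below
are invented for this sketch.]

  stmts :: [IO ()]
  stmts = [putStrLn "a", putStrLn "b", putStrLn "c"]

  main :: IO ()
  main = stmts !! 1   -- prints "b"; Haskell lists are 0-indexed,
                      -- unlike the 1-indexed Algol 68 row above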
From uk1o@rz.uni-karlsruhe.de Wed Jan 17 23:35:28 2001
Date: Thu, 18 Jan 2001 00:35:28 +0100
From: Hannah Schroeter uk1o@rz.uni-karlsruhe.de
Subject: Will Haskell be commercialized in the future?
Hello!
[a somewhat older mail]
On Mon, Nov 27, 2000 at 08:53:05PM +0100, Michal Gajda wrote:
> [...]
> I often use Haskell in imperative style(for example writting a toy
> [...]
I also do. Perhaps not super-often, but more than once.
- A program to interface to BSD's /dev/tun* and/or /dev/bpf* to
simulate network links with losses/delays (configurable).
~150 lines of C + ~ 1700 lines of Haskell with some GHC extensions.
One efficiency optimization was that I used network buffers
constructed out of MutableByteArray#s together with some
administrative information (an offset into the MBA to designate
the real packet start - necessary to leave room for in-place header
prefixing / removal, the real packet length, the buffer length ...).
Written single threaded with manual select calls (no conc Haskell).
- HTTP testers (two of them with slightly different tasks).
- file generators/translators (diverse, e.g. a generator for test
scripts for one of the above-named HTTP tester)
Of course, the latter often are something like
foo <- readFile bar
let baz = process_somehow foo
writeFile blurb baz
Kind regards,
Hannah.
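[For concreteness, a complete program following the read/process/write
pattern Hannah sketches above; the file names and the processing step
are invented for this sketch.]

  module Main where

  import Data.Char (toUpper)

  -- stand-in for whatever translation a real generator performs
  process_somehow :: String -> String
  process_somehow = map toUpper

  main :: IO ()
  main = do
    foo <- readFile "input.txt"
    let baz = process_somehow foo
    writeFile "output.txt" baz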
From uk1o@rz.uni-karlsruhe.de Thu Jan 18 00:00:18 2001
Date: Thu, 18 Jan 2001 01:00:18 +0100
From: Hannah Schroeter uk1o@rz.uni-karlsruhe.de
Subject: GHCi, Hugs (was: Mixture ...)
Hello!
On Mon, Dec 18, 2000 at 02:07:43AM -0800, Simon Peyton-Jones wrote:
> | It'd be good if left-behind tools were using a BSD(-like) licence, in
> | the event that anybody - commercial or otherwise - who wanted to pick
> | them up again, were free to do so.
> GHC does indeed have a BSD-like license.
Except e.g. for the gmp included in the source and the runtime system.
So more precisely: The non-third-party part of GHC has a BSD-like license,
together with *some* third-party parts, maybe.
Kind regards,
Hannah.
From nordland@cse.ogi.edu Thu Jan 18 00:07:14 2001
Date: Wed, 17 Jan 2001 16:07:14 -0800
From: Johan Nordlander nordland@cse.ogi.edu
Subject: O'Haskell OOP Polymorphic Functions
Ashley Yakeley wrote:
>
> OK, I've figured it out. In this O'Haskell statement,
>
> > struct Derived < Base =
> > value :: Int
>
> ...Derived is not, in fact, a subtype of Base. Derived and Base are
> disjoint types, but an implicit map of type "Derived -> Base" has been
> defined.
>
> --
> Ashley Yakeley, Seattle WA
Well, they are actually subtypes, as far as no implicit mapping needs to be
defined. But since Derived and Base also are two distinct type constructors,
the overloading system treats them as completely unrelated types (which is fine,
in general). To this the upcoming O'Hugs release adds the capability of using
an instance defined for a (unique, smallest) supertype of the inferred type, in
case an instance for the inferred type is missing. This all makes a system that
is very similar to the overlapping instances extension that Tom mentioned.
-- Johan
From uk1o@rz.uni-karlsruhe.de Thu Jan 18 00:10:54 2001
Date: Thu, 18 Jan 2001 01:10:54 +0100
From: Hannah Schroeter uk1o@rz.uni-karlsruhe.de
Subject: Boolean Primes Map (continued)
Hello!
On Fri, Dec 22, 2000 at 05:58:56AM +0200, Shlomi Fish wrote:
> primes :: Int -> [Int]
> primes how_much = (iterate 2 initial_map) where
> initial_map :: [Bool]
> initial_map = (map (\x -> True) [ 0 .. how_much])
> iterate :: Int -> [Bool] -> [Int]
> iterate p (a:as) | p > mybound = process_map p (a:as)
> | a = p:(iterate (p+1) (mymark (p+1) step (2*p) as))
> | (not a) = (iterate (p+1) as) where
> step :: Int
> step = if p == 2 then p else 2*p
> mymark :: Int -> Int -> Int -> [Bool] -> [Bool]
> mymark cur_pos step next_pos [] = []
> mymark cur_pos step next_pos (a:as) =
> if (cur_pos == next_pos) then
> False:(mymark (cur_pos+1) step (cur_pos+step) as)
> else
> a:(mymark (cur_pos+1) step next_pos as)
> mybound :: Int
> mybound = ceiling(sqrt(fromIntegral(how_much)))
> process_map :: Int -> [Bool] -> [Int]
> process_map cur_pos [] = []
> process_map cur_pos (a:as) | a = cur_pos:(process_map (cur_pos+1) as)
> | (not a) = (process_map (cur_pos+1) as)
This is buggy.
hannah@mamba:~/src/haskell $ ./primes3 100
[2,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101]
51 primes found.
Correct result is:
hannah@mamba:~/src/haskell $ ./primes0 100
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97]
25 primes found.
And it's much slower than your previous, correct variant as well as
my just-mailed variant.
Kind regards,
Hannah.
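[For reference, a minimal sketch of a Bool-map sieve that does give the
correct output quoted above. This is neither Shlomi's nor Hannah's
actual code, and it omits the sqrt bound, so it makes no claim to
speed.]

  -- The flags correspond to the numbers 2 .. how_much.
  primes :: Int -> [Int]
  primes how_much = sieve 2 (replicate (how_much - 1) True)
    where
      -- sieve n flags: the head of flags is the flag for the number n
      sieve _ []     = []
      sieve n (f:fs)
        | f          = n : sieve (n + 1) (mark (n + 1) n (2 * n) fs)
        | otherwise  = sieve (n + 1) fs
      -- mark cur step next flags: clear every step-th flag starting at
      -- next; cur is the number represented by the head of flags
      mark _ _ _ []  = []
      mark cur step next (f:fs)
        | cur == next = False : mark (cur + 1) step (next + step) fs
        | otherwise   = f     : mark (cur + 1) step next fs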
From ashley@semantic.org Thu Jan 18 00:37:01 2001
Date: Wed, 17 Jan 2001 16:37:01 -0800
From: Ashley Yakeley ashley@semantic.org
Subject: O'Haskell OOP Polymorphic Functions
At 2001-01-17 16:07, Johan Nordlander wrote:
>Ashley Yakeley wrote:
>>
>> OK, I've figured it out. In this O'Haskell statement,
>>
>> > struct Derived < Base =
>> > value :: Int
>>
>> ...Derived is not, in fact, a subtype of Base. Derived and Base are
>> disjoint types, but an implicit map of type "Derived -> Base" has been
>> defined.
>>
>> --
>> Ashley Yakeley, Seattle WA
>
>Well, they are actually subtypes, as far as no implicit mapping needs to be
>defined. But since Derived and Base also are two distinct type constructors,
>the overloading system treats them as completely unrelated types (which is
>fine, in general).
All of O'Haskell treats them as completely unrelated types. In fact, this
O'Haskell...
> struct Base =
> b1 :: Int
> b2 :: Char
>
> struct Derived < Base =
> d1 :: Int
...is a kind of syntactic sugar for this Haskell...
> data Base = Base (Int,Char)
> dotb1 (Base (x,_)) = x
> dotb2 (Base (_,x)) = x
>
> data Derived = Derived (Int,Char,Int)
> dotd1 (Derived (_,_,x)) = x
>
> implicitMap (Derived (b1,b2,d1)) = Base (b1,b2)
This seems to be stretching the concept of 'subtype'.
Sorry if I sound so bitter and disappointed. I was hoping for a Haskell
extended with real dynamic subtyping...
--
Ashley Yakeley, Seattle WA
From lennart@mail.augustsson.net Thu Jan 18 01:03:14 2001
Date: Wed, 17 Jan 2001 20:03:14 -0500
From: Lennart Augustsson lennart@mail.augustsson.net
Subject: O'Haskell OOP Polymorphic Functions
Ashley Yakeley wrote:
> This seems to be stretching the concept of 'subtype'.
I don't think so; this is the essence of subtyping.
> Sorry if I sound so bitter and disappointed. I was hoping for a Haskell
> extended with real dynamic subtyping...
You seem to want dynamic type tests. This is another feature, and
sometimes a useful one. But it requires carrying around types at
runtime.
You might want to look at existential types; it is a similar feature.
-- Lennart
From wli@holomorphy.com Thu Jan 18 23:57:46 2001
Date: Thu, 18 Jan 2001 15:57:46 -0800
From: William Lee Irwin III wli@holomorphy.com
Subject: A simple problem
Moving over to haskell-cafe...
At 2001-01-18 05:16, Saswat Anand wrote:
>> fun 3 --gives error in Hugs
>> fun (3::Integer) -- OK
>>
>> I am building an embedded language, so don't want user to cast. Is
>> there a solution?
On Thu, Jan 18, 2001 at 03:38:10PM -0800, Ashley Yakeley wrote:
> 3 is not always an Integer. It's of type "(Num a) => a".
> I couldn't find a way to say that every Num is a C.
I didn't try to say every Num is C, but I found a way to make his
example work:
class C a where
fun :: a -> Integer
instance Integral a => C a where
fun = toInteger . succ
One has no trouble whatsoever with evaluating fun 3 with this instance
defined instead of the original. I'm not sure as to the details, as I'm
fuzzy on the typing derivations making heavy use of qualified types. Is
this the monomorphism restriction biting us again?
Cheers,
Bill
--
<nrut> how does one decide if something is undecidable?
<Galois> carefully
--
From ashley@semantic.org Fri Jan 19 00:04:01 2001
Date: Thu, 18 Jan 2001 16:04:01 -0800
From: Ashley Yakeley ashley@semantic.org
Subject: A simple problem
At 2001-01-18 15:57, William Lee Irwin III wrote:
>class C a where
> fun :: a -> Integer
>
>instance Integral a => C a where
> fun = toInteger . succ
Gives "syntax error in instance head (constructor expected)" at the
'instance' line in Hugs. Is there an option I need to turn on or something?
--
Ashley Yakeley, Seattle WA
From wli@holomorphy.com Fri Jan 19 00:07:39 2001
Date: Thu, 18 Jan 2001 16:07:39 -0800
From: William Lee Irwin III wli@holomorphy.com
Subject: A simple problem
At 2001-01-18 15:57, William Lee Irwin III wrote:
>>class C a where
>> fun :: a -> Integer
>>
>>instance Integral a => C a where
>> fun = toInteger . succ
>
On Thu, Jan 18, 2001 at 04:04:01PM -0800, Ashley Yakeley wrote:
> Gives "syntax error in instance head (constructor expected)" at the
> 'instance' line in Hugs. Is there an option I need to turn on or something?
Yes, invoke hugs with the -98 option to turn off strict Haskell 98
compliance.
Cheers,
Bill
From Tom.Pledger@peace.com Fri Jan 19 00:34:04 2001
Date: Fri, 19 Jan 2001 13:34:04 +1300
From: Tom Pledger Tom.Pledger@peace.com
Subject: A simple problem
Ashley Yakeley writes:
> At 2001-01-18 15:38, I wrote:
>
> >3 is not always an Integer. It's of type "(Num a) => a".
>
> Of course, it would be nice if 3 were an Integer, and Integer were a
> subtype of Real. I haven't come across a language that does this, where
> for instance 3.0 can be cast to Integer (because it is one) but 3.1
> cannot be.
A cast in that direction - becoming more specific - would be nicely
typed as:
Real -> Maybe Integer
or with the help of the fail and return methods:
Monad m => Real -> m Integer
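[A concrete sketch of that downward cast, using Double in place of an
abstract Real; the name castToInteger is invented here.]

  castToInteger :: Double -> Maybe Integer
  castToInteger x
    | fromInteger n == x = Just n    -- 3.0 really is an Integer
    | otherwise          = Nothing   -- 3.1 is not
    where n = truncate x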
From simonmar@microsoft.com Wed Jan 17 10:39:53 2001
Date: Wed, 17 Jan 2001 02:39:53 -0800
From: Simon Marlow simonmar@microsoft.com
Subject: {-# LINE 100 "Foo.hs #-} vs. # 100 "Foo.hs"
> Indeed. Or do you want to tell me that you are going to
> break one of my favourite programs?
>=20
> [ code deleted ]
> # 111 "Foo.hs"
Actually the cpp-style pragma is only recognised if the '#' is in the
leftmost column and is followed by optional spaces and a digit. It's
quite hard to write one of these in a legal Haskell program, but not
impossible.
Simon
From qrczak@knm.org.pl Fri Jan 19 07:54:34 2001
Date: 19 Jan 2001 07:54:34 GMT
From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl
Subject: {-# LINE 100 "Foo.hs #-} vs. # 100 "Foo.hs"
Wed, 17 Jan 2001 02:39:53 -0800, Simon Marlow <simonmar@microsoft.com> pisze:
> Actually the cpp-style pragma is only recognised if the '#' is in the
> leftmost column and is followed by optional spaces and a digit. It's
> quite hard to write one of these in a legal Haskell program, but not
> impossible.
It's enough to change Manuel's program to use {;} instead of layout.
But it won't happen in any real life program.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From chak@cse.unsw.edu.au Fri Jan 19 10:45:55 2001
Date: Fri, 19 Jan 2001 10:45:55 GMT
From: Manuel M. T. Chakravarty chak@cse.unsw.edu.au
Subject: {-# LINE 100 "Foo.hs #-} vs. # 100 "Foo.hs"
qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) wrote,
> Wed, 17 Jan 2001 02:39:53 -0800, Simon Marlow <simonmar@microsoft.com> pisze:
>
> > Actually the cpp-style pragma is only recognised if the '#' is in the
> > leftmost column and is followed by optional spaces and a digit. It's
> > quite hard to write one of these in a legal Haskell program, but not
> > impossible.
>
> It's enough to change Manuel's program to use {;} instead of layout.
> But it won't happen in any real life program.
A dangerous statement. For example, automatically generated
code often contains quite strange constructs. If something breaks
the standard, it breaks the standard.
Cheers,
Manuel
From tweed@compsci.bristol.ac.uk Tue Jan 23 18:23:18 2001
Date: Tue, 23 Jan 2001 18:23:18 +0000 (GMT)
From: D. Tweed tweed@compsci.bristol.ac.uk
Subject: Specifications of 'any', 'all', 'findIndices'
On Tue, 23 Jan 2001, Mark Tullsen wrote:
> Johannes Waldmann wrote:
> > ...
> > I'd rather write clear code, than worry about efficiency too early.
> > Who said this, "premature optimization is the root of all evil".
>
> I've always attributed this to Donald Knuth:
>
> Premature optimization is the root of all evil in programming.
In his paper on the errors of TeX (no proper ref but it's reprinted in his
book on Literate Programming) he calls it Hoare's dictum (i.e. Tony
Hoare) although the context suggests that this isn't an `official
name'. Dunno if Hoare heard it from someone else though...
___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm | tweed's law: however many computers
email: tweed@cs.bris.ac.uk      | you have, half your time is spent
work tel: (0117) 954-5250       | waiting for compilations to finish.
From diatchki@cse.ogi.edu Tue Jan 23 20:16:05 2001
Date: Tue, 23 Jan 2001 12:16:05 -0800
From: Iavor Diatchki diatchki@cse.ogi.edu
Subject: 'any' and 'all' compared with the rest of the Report
hello
i myself am not an experienced Haskell user, so please correct me
if i am wrong. it is difficult in general to reason about the
performance of lazy programs, so i don't think one can assume
much. in particular i don't think 'any' and 'all' will
perform in linear space. here is why i think so:
take an example when 'any' is applied to some list (x:xs)
and someone is actually interested in the result:
any p (x:xs)
1. -> (or . map p) (x:xs)
2. -> or (map p (x:xs))
3. -> foldr (||) False (map p (x:xs))
[at this stage we need to know what kind of list (map p (x:xs)) is,
i.e. empty or a cons thing, so we need to do some evaluation]
4. -> foldr (||) False (p x : map p xs)
5. -> p x || (foldr (||) False (map p xs))
[at this stage we need to know what kind of thing is p x, i.e.
True or False, so we need to evaluate p x]
6. -> if p x is True we are done (result True)
7. -> if p x is False the result is (foldr (||) False (map p xs))
and we go back to 3. note that p x has become garbage and so it
doesn't really take up any space, so one really needs only enough
space to process the list one element at a time.
what causes problems is the need to create unnecessary cons cells
(i.e. the evaluation after 3.). this is bad because it takes time.
of course it only adds a constant so the complexity is the same
but in practice programs run slower. this is where i would expect
a good compiler to do some optimisation, i.e. to remove the need
for the intermediate list.
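[To make the comparison concrete, a small sketch (not quoted from the
Report) of the two definitions under discussion: the Prelude-style
composition, which goes through an intermediate list of Bools, and the
directly recursive version, which does not.]

  import Prelude hiding (any, or)

  or :: [Bool] -> Bool
  or = foldr (||) False

  any :: (a -> Bool) -> [a] -> Bool
  any p = or . map p                -- builds an intermediate list

  anyRec :: (a -> Bool) -> [a] -> Bool
  anyRec _ []     = False           -- no intermediate list at all
  anyRec p (x:xs) = p x || anyRec p xs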
i like the idea of programming at a higher level, as i believe it
produces "better structured" programs. what i mean is that one
manages to capture certain aspects of a program, which would
be obscured if one always used explicit recursion. i think
higher order functions like maps and folds really go a long way
toward structuring functional programs. in a way this is similar
to using loops instead of explicit gotos in procedural programs.
anyways these are my deep thoughts on the matter :)
bye
iavor
On Tue, Jan 23, 2001 at 12:35:19PM -0600, Eric Shade wrote:
...
>
> Then I get to 'any' and 'all', whose specification requires linear
> time and *linear* space when it should run in constant space. (By the
> way, I checked and GHC does *not* use the Prelude definitions but
> rather the obvious recursive ones, and most of its optimizations based
> on "good consumers/producers" use meta-linguistic rewrite rules. So
> without knowing the specific optimizations that a compiler provides, I
> think it's safe to assume that the Prelude 'any' and 'all' *will*
> require linear space.)
...
--
+---------------------------------+---------------------------------------+
|Iavor S. Diatchki | email: diatchki@cse.ogi.edu |
|Dept. of Computer Science | web: http://www.cse.ogi.edu/~diatchki |
|Oregon Graduate Institute | tel: 5037481631 |
+---------------------------------+---------------------------------------+
From Sven.Panne@informatik.uni-muenchen.de Tue Jan 23 22:10:53 2001
Date: Tue, 23 Jan 2001 23:10:53 +0100
From: Sven Panne Sven.Panne@informatik.uni-muenchen.de
Subject: 'any' and 'all' compared with the rest of the Report
Iavor Diatchki wrote:
> [...] but in practice programs run slower.
If "practise" = "simple interpreter", yes. But...
> this is where i would expect a good compiler to do some optimisation,
> i.e to remove the need for the intermediate list.
<TotallyUnbiasedAd>
Given
or = foldr (||) False
any p = or . map p
ghc -O generates basically the following code (use -ddump-simpl to
see this):
any :: (a -> Bool) -> [a] -> Bool
any p xs = let go ys = case ys of
(z:zs) -> case p z of
False -> go zs
True -> True
in go xs
This is exactly the recursive version, without any intermediate list,
but I hardly think that anybody recognizes this with a quick glance
only.
</ToTallyUnbiasedAd>
> i like the idea of programming at a higher level, as i believe it
> produces "better structured" programs. what i mean is that one
> manages to capture certain aspects of a program, which would
> be obscured if one always used explicit recursion. i think
> higher order functions like maps and folds really go a long way
> toward structuring functional programs. in a way this is simillar
> to using loops instead of explicit gotos in procedural programs.
> anyways these are my deep thought on the matter :)
IMHO you should write any kind of recursion over a given data structure
at most once (in a higher order function). (Constructor) classes and
generics are a further step in this direction. It vastly improves
readability (after you get used to it :-) and often there is no
performance
hit at all.
Cheers,
Sven
From uk1o@rz.uni-karlsruhe.de Tue Jan 23 22:25:16 2001
Date: Tue, 23 Jan 2001 23:25:16 +0100
From: Hannah Schroeter uk1o@rz.uni-karlsruhe.de
Subject: 'any' and 'all' compared with the rest of the Report
Hello!
On Tue, Jan 23, 2001 at 11:10:53PM +0100, Sven Panne wrote:
> [...]
> <TotallyUnbiasedAd>
> Given
>
> or = foldr (||) False
> any p = or . map p
> ghc -O generates basically the following code (use -ddump-simpl to
> see this):
> any :: (a -> Bool) -> [a] -> Bool
> any p xs = let go ys = case ys of
> (z:zs) -> case p z of
> False -> go zs
> True -> True
> in go xs
Mental note: I should really upgrade GHC. I'm however a bit afraid about
ghc-current, as I'm on a non-ELF arch.
> [...]
Kind regards,
Hannah.
From Sven.Panne@informatik.uni-muenchen.de Tue Jan 23 22:45:12 2001
Date: Tue, 23 Jan 2001 23:45:12 +0100
From: Sven Panne Sven.Panne@informatik.uni-muenchen.de
Subject: 'any' and 'all' compared with the rest of the Report
I wrote:
> [...]
> ghc -O generates basically the following code (use -ddump-simpl to
> see this):
>
> any :: (a -> Bool) -> [a] -> Bool
> any p xs = let go ys = case ys of
Ooops, cut'n'paste error: Insert
[] -> False
here. :-}
> (z:zs) -> case p z of
> False -> go zs
> True -> True
> in go xs
> [...]
Cheers,
Sven
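[Putting the two messages together, the corrected code reads as
follows; the import is only needed so that the definition can stand in
for the Prelude's any.]

  import Prelude hiding (any)

  any :: (a -> Bool) -> [a] -> Bool
  any p xs = let go ys = case ys of
                           []     -> False
                           (z:zs) -> case p z of
                                       False -> go zs
                                       True  -> True
             in go xs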
From jmaessen@mit.edu Wed Jan 24 19:25:12 2001
Date: Wed, 24 Jan 2001 14:25:12 -0500
From: Jan-Willem Maessen jmaessen@mit.edu
Subject: 'any' and 'all' compared with the rest of the Report
(Note, I've kept this on Haskell-cafe)
Bjorn Lisper wrote (several messages back):
> ... What I in turn would like to add is that specifications like
>
> any p = or . map p
>
> are on a higher level of abstraction
...
> The first specification is, for instance, directly data parallel
> which facilitates an implementation on a parallel machine or in hardware.
As was pointed out in later discussion, the specification of "or" uses
foldr and is thus not amenable to data parallelism.
On which subject Jerzy Karczmarczuk <karczma@info.unicaen.fr> later noted:
> On the other hand, having an even more generic logarithmic iterator
> for associative operations seems to me a decent idea.
In my master's thesis, "Eliminating Intermediate Lists in pH using
Local Transformations", I proposed doing just that, and called the
resulting operator "reduce" in line with Bird's list calculus. I then
gave rules for performing deforestation so that sublists could be
traversed in parallel. The compiler can choose from many possible
definitions of reduce, including:
reduce = foldr
reduce f z = foldl (flip f) z
reduce = divideAndConquer
(most of the complexity of the analysis focuses on making this choice
intelligently; the rest could be re-stated RULES-style just as the GHC
folks have done with foldr/build.) We've been using this stuff for
around 6-7 years now.
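[divideAndConquer is named but not spelled out in the message; a
minimal sketch of such a balanced reduction for an associative operator
f with unit z might look like this. On ordinary cons lists the splitAt
itself costs a sequential pass, which is part of why the choice between
these definitions is delicate.]

  divideAndConquer :: (a -> a -> a) -> a -> [a] -> a
  divideAndConquer _ z []  = z
  divideAndConquer _ _ [x] = x
  divideAndConquer f z xs  =
      divideAndConquer f z ls `f` divideAndConquer f z rs
    where
      -- the two halves could in principle be reduced in parallel
      (ls, rs) = splitAt (length xs `div` 2) xs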
Fundamentally, many associative operators have a "handedness" which we
disregard at our peril. For example, defining "or" in terms of foldl
as above is very nearly correct until handed an infinite list, but it
performs really badly regardless of the input. Thus, we need a richer
set of primitives (and a lot more complexity) if we really want to
capture "what is most commonly a good idea" as opposed to "what is
safe". For example, our deforestation pass handles "concat" as part
of its internal representation, since no explicit definition using
reduction is satisfactory:
concat = foldr (++) [] -- least work, no parallelism
concat = reduce (++) [] -- parallelizes well, but could pessimize
-- the program
For this reason, it's hard to use "reduce" directly in more than a
handful of prelude functions. We use reduce a lot, but only after
uglification of the prelude definitions to convince the compiler to
"guess right" about the appropriate handedness.
There is also the problem of error behavior; if we unfold an "or"
eagerly to exploit its data parallelism we may uncover errors, even if
it's finite:
> or [True, error "This should never be evaluated"]
True
My current work includes [among other things] ways to eliminate this
problem---that is, we may do a computation eagerly and defer or
discard any errors.
-Jan-Willem Maessen
My Master's Thesis is at:
ftp://csg-ftp.lcs.mit.edu/pub/papers/theses/maessen-sm.ps.gz
A shorter version (with some refinements) is found in:
ftp://csg-ftp.lcs.mit.edu/pub/papers/csgmemo/memo-370.ps.gz
From lisper@it.kth.se Thu Jan 25 09:40:49 2001
Date: Thu, 25 Jan 2001 10:40:49 +0100 (MET)
From: Bjorn Lisper lisper@it.kth.se
Subject: 'any' and 'all' compared with the rest of the Report
>There is also the problem of error behavior; if we unfold an "or"
>eagerly to exploit its data parallelism we may uncover errors, even if
>it's finite:
>> or [True, error "This should never be evaluated"]
>True
>My current work includes [among other things] ways to eliminate this
>problem---that is, we may do a computation eagerly and defer or
>discard any errors.
What you basically have to do is to treat purely data-dependent errors (like
division by zero, or indexing an array out of bounds) as values rather than
events. Then you can decide whether to raise the error or discard it,
depending on whether the error value turned out to be needed or not.
You will have to extend the operators of the language to deal also with
error values. Basically, the error values should have algebraic properties
similar to bottom (so strict functions return error given an error as
argument). Beware that some decisions have to be taken regarding how error
values should interact with bottom. (For instance, should we have
error + bottom = error or error + bottom = bottom?) The choice affects which
evaluation strategies will be possible.
Björn Lisper
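[A toy sketch of the "errors as values" idea; the type and the names
below are invented for illustration and are not from the message.]

  -- deferred, NaN-like error values for Int computations
  data EV = Err String | Val Int
    deriving Show

  plus :: EV -> EV -> EV
  plus (Val x) (Val y) = Val (x + y)
  plus (Err e) _       = Err e   -- an Err on the left wins even if the
                                 -- right argument is bottom
  plus _       (Err e) = Err e   -- reached only after the left argument
                                 -- has already been evaluated

[With the clauses in this order, plus (Err e) bottom yields Err e while
plus bottom (Err e) diverges; reordering the clauses changes which of
the two behaviours you get, which is exactly the error-versus-bottom
interaction choice described above.]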
From jmaessen@mit.edu Thu Jan 25 16:09:07 2001
Date: Thu, 25 Jan 2001 11:09:07 -0500
From: Jan-Willem Maessen jmaessen@mit.edu
Subject: 'any' and 'all' compared with the rest of the Report
Bjorn Lisper <lisper@it.kth.se> replies to my reply:
> >My current work includes [among other things] ways to eliminate this
> >problem---that is, we may do a computation eagerly and defer or
> >discard any errors.
>
> What you basically have to do is to treat purely data-dependent errors (like
> division by zero, or indexing an array out of bounds) as values rather than
> events.
Indeed. We can have a class of deferred exception values similar to
IEEE NaNs.
[later]:
> Beware that some decisions have to be taken regarding how error
> values should interact with bottom. (For instance, should we have
> error + bottom = error or error + bottom = bottom?) The choice affects which
> evaluation strategies will be possible.
Actually, as far as I can tell we have absolute freedom in this
respect. What happens when you run the following little program?
\begin{code}
forever x = forever x
bottomInt :: Int
bottomInt = error "Evaluating bottom is naughty" + forever ()
main = print bottomInt
\end{code}
I don't know of anything in the Haskell language spec that forces us
to choose whether to signal the error or diverge in this case (though
it's clear we must do one or the other). Putting strong constraints
on evaluation order would cripple a lot of the worker/wrapper-style
optimizations that (eg) GHC users depend on for fast code. We want
the freedom to demand strict arguments as early as possible; the
consequence is we treat all bottoms equally, even if they exhibit
different behavior in practice. This simplification is a price of
"clean equational semantics", and one I'm more than willing to pay.
-Jan-Willem Maessen
jmaessen@mit.edu
[PS - I've deliberately dodged the issue of program-level exception
handling mechanisms and the like.]
From marko@ki.informatik.uni-frankfurt.de Fri Jan 26 12:59:02 2001
Date: Fri, 26 Jan 2001 13:59:02 +0100
From: Marko Schuetz marko@ki.informatik.uni-frankfurt.de
Subject: 'any' and 'all' compared with the rest of the Report
From: Jan-Willem Maessen <jmaessen@mit.edu>
Subject: Re: 'any' and 'all' compared with the rest of the Report
Date: Thu, 25 Jan 2001 11:09:07 -0500
> Bjorn Lisper <lisper@it.kth.se> replies to my reply:
> > >My current work includes [among other things] ways to eliminate this
> > >problem---that is, we may do a computation eagerly and defer or
> > >discard any errors.
> >
> > What you basically have to do is to treat purely data-dependent errors (like
> > division by zero, or indexing an array out of bounds) as values rather than
> > events.
>
> Indeed. We can have a class of deferred exception values similar to
> IEEE NaNs.
>
> [later]:
> > Beware that some decisions have to be taken regarding how error
> > values should interact with bottom. (For instance, should we have
> > error + bottom = error or error + bottom = bottom?) The choice affects which
> > evaluation strategies will be possible.
>
> Actually, as far as I can tell we have absolute freedom in this
> respect. What happens when you run the following little program?
I don't think we have absolute freedom. Assuming we want
\forall s : bottom \le s
including s = error, then we should also have error \not\le
bottom. For all other values s \not\equiv bottom we would want error
\le s.
      .  .  .  .  .  .  .  .  .
       \                    /
        \                  /
                 .
                 .
                 .
                 |
               error
                 |
              bottom
Now if f is a strict function returning a strict function then
(f bottom) error \equiv bottom error \equiv bottom
and due to f's strictness either f error \equiv bottom or f error
\equiv error. The former is as above. For the latter (assuming
monotonicity) we have error \le 1 \implies f error \le f 1 and thus
(f error) bottom \le (f 1) bottom \equiv bottom
On the other hand, if error and other data values are incomparable.
      .  .  .  .  .   error
       \             /
        \           /
              .
              .
              |
           bottom
and you want, say, error + bottom \equiv error then + can no longer be
strict in its second argument....
So I'd say error + bottom \equiv bottom and bottom + error \equiv
bottom.
>
> \begin{code}
> forever x = forever x
>
> bottomInt :: Int
> bottomInt = error "Evaluating bottom is naughty" + forever ()
>
> main = print bottomInt
> \end{code}
>
> I don't know of anything in the Haskell language spec that forces us
> to choose whether to signal the error or diverge in this case (though
> it's clear we must do one or the other). Putting strong constraints
> on evaluation order would cripple a lot of the worker/wrapper-style
> optimizations that (eg) GHC users depend on for fast code. We want
> the freedom to demand strict arguments as early as possible; the
> consequence is we treat all bottoms equally, even if they exhibit
> different behavior in practice. This simplification is a price of
> "clean equational semantics", and one I'm more than willing to pay.
If error \equiv bottom and you extend, say, Int with NaNs, how do you
implement arithmetic such that Infinity + Infinity \equiv Infinity and
Infinity/Infinity \equiv Invalid Operation?
Marko
From jmaessen@mit.edu Fri Jan 26 17:26:17 2001
Date: Fri, 26 Jan 2001 12:26:17 -0500
From: Jan-Willem Maessen jmaessen@mit.edu
Subject: 'any' and 'all' compared with the rest of the Report
Marko Schuetz <MarkoSchuetz@web.de> starts an explanation of bottom
vs. error with an assumption which I think is dangerous:
> Assuming we want
>
> \forall s : bottom \le s
>
> including s = error, then we should also have error \not\le
> bottom.
He uses this, and an argument based on currying, to show that strict
functions ought to force their arguments left to right.
This seems to me to be another instance of an age-old debate in
programming language design/semantics: If we can in practice observe
evaluation order, should we therefore specify that evaluation order?
This is a debate that's raged for quite a while among users of
imperative languages. Some languages (Scheme comes to mind) very
clearly state that this behavior is left unspecified.
Fortunately, for the Haskell programmer the debate is considerably
simpler, as only the behavior of "wrong" programs is affected. I am
more than willing to be a little ambiguous about the error behavior of
programs, by considering "bottom" and "error" to be one and the same.
As I noted in my last message, this allows some optimizations which
would otherwise not be allowed. Here's the worker-wrapper
optimization at work; I'll use explicit unboxing to make the
evaluation clear.
forever x = forever x -- throughout.
addToForever :: Int -> Int
addToForever b = forever () + b
main = print (addToForever (error "Bottom is naughty!"))
==
-- expanding the definition of +
addToForever b =
case forever () of
I# a# ->
case b of
I# b# -> a# +# b#
==
-- At this point strictness analysis reveals that addToForever
-- is strict in its argument. As a result, we perform the worker-
-- wrapper transformation:
addToForever b =
case b of
I# b# -> addToForever_worker b#
addToForever_worker b# =
let b = I# b#
in case forever () of
I# a# ->
case b of
I# b# -> a# +# b#
==
-- The semantics have changed---b will now be evaluated before
-- forever().
I've experimented with versions of Haskell where order of evaluation
did matter. It was a giant albatross around our neck---there is
simply no way to cleanly optimize programs in such a setting without
doing things like termination analysis to justify the nice old
transformations. If you rely on precise results from an analysis to
enable transformation, you all-too-frequently miss obvious
opportunities. For this reason I *very explicitly* chose not to make
such distinctions in my work.
In general, making too many semantic distinctions weakens the power of
algebraic semantics. Two levels---bottom and everything else---seems
about the limit of acceptable complexity. If you look at the work on
free theorems you quickly discover that even having bottom in the
language makes life a good deal more difficult, and really we'd like
to have completely flat domains.
I'd go as far as saying that it also gives us some prayer of
explaining our algebraic semantics to the programmer. A complex
algebra becomes too bulky to reason about when things act strangely.
Bottom is making things hard enough.
-Jan-Willem Maessen
PS - Again, we don't try to recover from errors. This is where the
comparison with IEEE arithmetic breaks down: NaNs are specifically
designed so you can _test_ for them and take action. I'll also point
out that infinities are _not_ exceptional values; they're semantically
"at the same level" as regular floats---so the following comparison is
a bit disingenuous:
> If error \equiv bottom and you extend, say, Int with NaNs, how do you
> implement arithmetic such that Infinity + Infinity \equiv Infinity and
> Infinity/Infinity \equiv Invalid Operation?
From marko@ki.informatik.uni-frankfurt.de Fri Jan 26 19:00:51 2001
Date: Fri, 26 Jan 2001 20:00:51 +0100
From: Marko Schuetz marko@ki.informatik.uni-frankfurt.de
Subject: 'any' and 'all' compared with the rest of the Report
From: Jan-Willem Maessen <jmaessen@mit.edu>
Subject: Re: 'any' and 'all' compared with the rest of the Report
Date: Fri, 26 Jan 2001 12:26:17 -0500
> Marko Schuetz <MarkoSchuetz@web.de> starts an explanation of bottom
> vs. error with an assumption which I think is dangerous:
> > Assuming we want
> >
> > \forall s : bottom \le s
> >
> > including s = error, then we should also have error \not\le
> > bottom.
>
> He uses this, and an argument based on currying, to show that strict
> functions ought to force their arguments left to right.
I can't see where I did. I argued that distinguishing between error
and bottom seems to not leave much choice for bottom + error.
[..]
> forever x = forever x -- throughout.
>
> addToForever :: Int -> Int
> addToForever b = forever () + b
>
> main = print (addToForever (error "Bottom is naughty!"))
>
> ==
> -- expanding the definition of +
>
> addToForever b =
> case forever () of
> I# a# ->
> case b of
> I# b# -> a# +# b#
>
> ==
> -- At this point strictness analysis reveals that addToForever
> -- is strict in its argument. As a result, we perform the worker-
> -- wrapper transformation:
>
> addToForever b =
> case b of
> I# b# -> addToForever_worker b#
>
> addToForever_worker b# =
> let b = I# b#
> in case forever () of
> I# a# ->
> case b of
> I# b# -> a# +# b#
>
> ==
> -- The semantics have changed---b will now be evaluated before
> -- forever().
Contextually, the original and the worker/wrapper versions do not
differ. I.e. there is no program into which the two could be inserted
which would detect a difference. So their semantics should be regarded
as equal.
> PS - Again, we don't try to recover from errors. This is where the
> comparison with IEEE arithmetic breaks down: NaNs are specifically
> designed so you can _test_ for them and take action. I'll also point
> out that infinities are _not_ exceptional values; they're semantically
> "at the same level" as regular floats---so the following comparison is
> a bit disingenuous:
> > If error \equiv bottom and you extend, say, Int with NaNs, how do you
> > implement arithmetic such that Infinity + Infinity \equiv Infinity and
> > Infinity/Infinity \equiv Invalid Operation?
Infinity was chosen as an example: from what you described I had the
impression the implementation needed to match on NaN constructors at
some point. Is this not the case?
Marko
From jmaessen@mit.edu Fri Jan 26 20:53:51 2001
Date: Fri, 26 Jan 2001 15:53:51 -0500
From: Jan-Willem Maessen jmaessen@mit.edu
Subject: A clarification...
Marko Schuetz <MarkoSchuetz@web.de> replies to me as follows:
> > He uses this, and an argument based on currying, to show that strict
> > functions ought to force their arguments left to right.
>
> I can't see where I did. I argued that distinguishing between error
> and bottom seems to not leave much choice for bottom + error.
You're right, sorry. I misread the following from your earlier
message:
> So I'd say error + bottom \equiv bottom and bottom + error \equiv
> bottom.
As you noted:
> and you want, say, error + bottom \equiv error then + can no longer be
> strict in its second argument....
I'd managed to turn this around in my head.
Nonetheless, my fundamental argument stands: If we separate "bottom"
and "error", we have a few choices operationally, and I'm not fond of
them:
1) Make "error" a representable value. This appears closest to what
you were describing above:
case ERROR of x -> expr => expr [ERROR/x]
This is tricky for unboxed types (especially Ints; floats and
pointers aren't so hard; note that we need more than one
distinguishable error value in practice if we want to tell the user
something useful about what went wrong).
At this point, by the way, it's not a big leap to flatten the
domain as is done with IEEE NaNs, so that monotonicity wrt errors
is a language-level phenomenon rather than a semantic one and we
can handle exceptions by testing for error values.
2) Weaken the algebraic theory as I discussed in my last message.
3) Reject an operational reading of "case" as forcing evaluation and
continuing and have it "do something special" when it encounters
error:
case ERROR of x -> expr => ERROR glb expr[?/x]
From the programmer's perspective, I'd argue that it's *better* to
signal signal-able errors whenever possible, rather than deferring
them. If nothing else, a signaled error is easier to diagnose than
nontermination! Thus, I'd LIKE:
error + bottom === error
But I'm willing to acknowledge that I can't get this behavior
consistently, except with some sort of fair parallel execution.
I'm doing something along the lines of (3), but I abandon execution
immediately on seeing error---which is consistent only if
error==bottom:
case ERROR of x -> expr => ERROR
[Indeed, the compiler performs this reduction statically where
possible, as it gets rid of a lot of dead code.]
I defer the signaling of errors only if the expression in question is
being evaluated eagerly; for this there is no "case" construct
involved.
-Jan-Willem Maessen
From qrczak@knm.org.pl Fri Jan 26 20:19:47 2001
Date: 26 Jan 2001 20:19:47 GMT
From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl
Subject: A clarification...
Fri, 26 Jan 2001 15:53:51 -0500, Jan-Willem Maessen <jmaessen@mit.edu> pisze:
> 3) Reject an operational reading of "case" as forcing evaluation and
> continuing and have it "do something special" when it encounters
> error:
> case ERROR of x -> expr => ERROR glb expr[?/x]
The subject of errors vs. bottoms is discussed in
http://research.microsoft.com/~simonpj/papers/imprecise-exceptions.ps.gz
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From jmaessen@mit.edu Fri Jan 26 22:34:12 2001
Date: Fri, 26 Jan 2001 17:34:12 -0500
From: Jan-Willem Maessen jmaessen@mit.edu
Subject: A clarification...
qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) further clarifies my
clarification:
> > case ERROR of x -> expr => ERROR glb expr[?/x]
>
> The subject of errors vs. bottoms is discussed in
> http://research.microsoft.com/~simonpj/papers/imprecise-exceptions.ps.gz
Indeed. I crawled through my stack of electronic papers, and couldn't
find the reference. As I noted in my earlier messages, I was
deliberately dodging the issue of *catching* exceptions, as it really
requires a thorough treatment. :-) The "?" in the above reduction is
their special 0 term. They equate divergence with exceptional
convergence within the functional calculus for the reasons I outlined
[I reconstructed those reasons from memory and my own experience, so
differences are due to my faulty recollection]. Their notion of
refinement justifies the reduction I mentioned in my mail:
> > case ERROR of x -> expr => ERROR
In all, an excellent paper for those who are interested in this topic.
-Jan-Willem Maessen
From fjh@cs.mu.oz.au Sat Jan 27 14:24:58 2001
Date: Sun, 28 Jan 2001 01:24:58 +1100
From: Fergus Henderson fjh@cs.mu.oz.au
Subject: 'any' and 'all' compared with the rest of the Report
On 26-Jan-2001, Marko Schuetz <MarkoSchuetz@web.de> wrote:
> I don't think we have absolute freedom. Assuming we want
>
> \forall s : bottom \le s
>
> including s = error, then we should also have error \not\le
> bottom.
You lost me here. Why should we have error \not\le bottom?
Why not just error \not\lt bottom?
--
Fergus Henderson <fjh@cs.mu.oz.au> | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
From wohlstad@cs.ucdavis.edu Sun Jan 28 10:03:24 2001
Date: Sun, 28 Jan 2001 02:03:24 -0800 (PST)
From: Eric Allen Wohlstadter wohlstad@cs.ucdavis.edu
Subject: HScript
I have tried e-mailing the authors of HScript with my problems but they
seem unresponsive. I was hoping maybe someone out there could help me.
When I tried to run the demos I got an error "Could not find Prelude.hs",
so I changed the hugsPath variable in regedit to point to hugs/lib. Then I
got an error "Could not find Int". So I added hugs/lib/exts. Now I get the
error "Addr.hs (line 23): Unknown primitive reference to addrToInt".
Eric Wohlstadter
From marko@ki.informatik.uni-frankfurt.de Sun Jan 28 12:46:46 2001
Date: Sun, 28 Jan 2001 13:46:46 +0100
From: Marko Schuetz marko@ki.informatik.uni-frankfurt.de
Subject: 'any' and 'all' compared with the rest of the Report
From: Fergus Henderson <fjh@cs.mu.oz.au>
Subject: Re: 'any' and 'all' compared with the rest of the Report
Date: Sun, 28 Jan 2001 01:24:58 +1100
> On 26-Jan-2001, Marko Schuetz <MarkoSchuetz@web.de> wrote:
> > I don't think we have absolute freedom. Assuming we want
> >
> > \forall s : bottom \le s
> >
> > including s = error, then we should also have error \not\le
> > bottom.
>
> You lost me here. Why should we have error \not\le bottom?
> Why not just error \not\lt bottom?
I assumed a semantic distinction between error and bottom was intended
to accurately model the way the implementation would distinguish or defer
the erroneous computation. You are right that without this assumption
error \not\lt bottom would suffice.
Marko
From ashley@semantic.org Tue Jan 30 08:13:41 2001
Date: Tue, 30 Jan 2001 00:13:41 -0800
From: Ashley Yakeley ashley@semantic.org
Subject: O'Haskell OOP Polymorphic Functions
At 2001-01-17 17:03, Lennart Augustsson wrote:
>You seem to want dynamic type tests. This is another feature, and
>sometimes a useful one. But it requires carrying around types at
>runtime.
Yes. I tried to do that myself by adding a field, but it seems it can't
be done.
>You might want to look at existential types; it is a similar feature.
I seem to run into a similar problem:
--
class BaseClass s
data Base = forall a. BaseClass a => Base a
class (BaseClass s) => DerivedClass s
data Derived = forall a. DerivedClass a => Derived a
upcast :: Derived -> Base
upcast (Derived d) = Base d
downcast :: Base -> Maybe Derived
--
How do I define downcast?
--
Ashley Yakeley, Seattle WA
From fjh@cs.mu.oz.au Tue Jan 30 10:37:18 2001
Date: Tue, 30 Jan 2001 21:37:18 +1100
From: Fergus Henderson fjh@cs.mu.oz.au
Subject: O'Haskell OOP Polymorphic Functions
On 30-Jan-2001, Ashley Yakeley <ashley@semantic.org> wrote:
> At 2001-01-17 17:03, Lennart Augustsson wrote:
>
> >You seem to want dynamic type tests.
...
> >You might want to look at existential types; it is a similar feature.
>
> I seem to run into a similar problem:
>
> --
> class BaseClass s
> data Base = forall a. BaseClass a => Base a
>
> class (BaseClass s) => DerivedClass s
> data Derived = forall a. DerivedClass a => Derived a
>
> upcast :: Derived -> Base
> upcast (Derived d) = Base d
>
> downcast :: Base -> Maybe Derived
> --
>
> How do I define downcast?
class BaseClass s where
downcast_to_derived :: s -> Maybe Derived
--
Fergus Henderson <fjh@cs.mu.oz.au> | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
From daan@cs.uu.nl Tue Jan 30 11:24:59 2001
Date: Tue, 30 Jan 2001 12:24:59 +0100
From: Daan Leijen daan@cs.uu.nl
Subject: HScript
Hi Eric,
Unfortunately HaskellScript doesn't work with the latest hugs versions.
You need the version of Hugs that is explicitly provided on the
HaskellScript website. (http://www.cs.uu.nl/~daan/download/hugs98may.exe)
Hope this helps,
Daan.
----- Original Message -----
From: "Eric Allen Wohlstadter" <wohlstad@cs.ucdavis.edu>
To: <haskell-cafe@haskell.org>
Sent: Sunday, January 28, 2001 11:03 AM
Subject: HScript
> I have tried e-mailing the authors of HScript with my problems but they
> seem unresponsive. I was hoping maybe someone out there could help me.
> When I tried to run the demos I got an error "Could not find Prelude.hs",
> so I changed the hugsPath variable in regedit to point to hugs/lib. Then I
> got an error "Could not find Int". So I added hugs/lib/exts. Now I get the
> error "Addr.hs (line 23): Unknown primitive reference to addrToInt".
>
> Eric Wohlstadter
From ashley@semantic.org Tue Jan 30 22:16:08 2001
Date: Tue, 30 Jan 2001 14:16:08 -0800
From: Ashley Yakeley ashley@semantic.org
Subject: O'Haskell OOP Polymorphic Functions
At 2001-01-30 02:37, Fergus Henderson wrote:
>class BaseClass s where
> downcast_to_derived :: s -> Maybe Derived
Exactly what I was trying to avoid, since now every base class needs to
know about every derived class. This isn't really a practical way to
build an extensible type hierarchy.
--
Ashley Yakeley, Seattle WA
From qrczak@knm.org.pl Tue Jan 30 22:55:59 2001
Date: 30 Jan 2001 22:55:59 GMT
From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl
Subject: O'Haskell OOP Polymorphic Functions
Tue, 30 Jan 2001 00:13:41 -0800, Ashley Yakeley <ashley@semantic.org> pisze:
> How do I define downcast?
You can use a non-standard module Dynamic present in ghc, hbc and Hugs
(I don't know if it's compatible with O'Haskell).
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From fjh@cs.mu.oz.au Wed Jan 31 03:44:07 2001
Date: Wed, 31 Jan 2001 14:44:07 +1100
From: Fergus Henderson fjh@cs.mu.oz.au
Subject: O'Haskell OOP Polymorphic Functions
On 30-Jan-2001, Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> wrote:
> Tue, 30 Jan 2001 00:13:41 -0800, Ashley Yakeley <ashley@semantic.org> pisze:
>
> > How do I define downcast?
>
> You can use a non-standard module Dynamic present in ghc, hbc and Hugs
> (I don't know if it's compatible with O'Haskell).
That lets you downcast to specific ground types, but it doesn't
let you downcast to a type class constrained type variable.
--
Fergus Henderson <fjh@cs.mu.oz.au> | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
From fjh@cs.mu.oz.au Wed Jan 31 03:52:20 2001
Date: Wed, 31 Jan 2001 14:52:20 +1100
From: Fergus Henderson fjh@cs.mu.oz.au
Subject: O'Haskell OOP Polymorphic Functions
On 30-Jan-2001, Ashley Yakeley <ashley@semantic.org> wrote:
> At 2001-01-30 02:37, Fergus Henderson wrote:
>
> >class BaseClass s where
> > downcast_to_derived :: s -> Maybe Derived
>
> Exactly what I was trying to avoid, since now every base class needs to
> know about every derived class. This isn't really a practical way to
> build an extensible type hierarchy.
Right.
I don't know of any way to do that in Hugs/ghc without the problem that
you mention. Really it needs language support, I think.
(I have no idea if you can do it in O'Haskell.)
Note that there are some nasty semantic interactions with dynamic loading,
which is another feature that it would be nice to support. I think it's
possible to add dynamic loading to Haskell 98 without compromising the
semantics, but if you support dynamic type class casts, or overlapping
instance declarations, then dynamically loading a new module could
change the semantics of existing code, rather than just adding new code.
--
Fergus Henderson <fjh@cs.mu.oz.au> | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
From ashley@semantic.org Wed Jan 31 05:47:36 2001
Date: Tue, 30 Jan 2001 21:47:36 -0800
From: Ashley Yakeley ashley@semantic.org
Subject: Type Pattern-Matching for Existential Types
At 2001-01-30 19:52, Fergus Henderson wrote:
>On 30-Jan-2001, Ashley Yakeley <ashley@semantic.org> wrote:
>> At 2001-01-30 02:37, Fergus Henderson wrote:
>>
>> >class BaseClass s where
>> > downcast_to_derived :: s -> Maybe Derived
>>
>> Exactly what I was trying to avoid, since now every base class needs to
>> know about every derived class. This isn't really a practical way to
>> build an extensible type hierarchy.
>
>Right.
>
>I don't know of any way to do that in Hugs/ghc without the problem that
>you mention. Really it needs language support, I think.
>(I have no idea if you can do it in O'Haskell.)
It can't be done in O'Haskell either...
Given that we have existential types, it would be nice to have a
pattern-matching mechanism to get at the inside value. Something like...
--
data Any = forall a. Any a
get :: Any -> Maybe Char
get (Any (c::Char)) = Just c -- bad
get _ = Nothing
--
...but as it stands, this is not legal Haskell, according to Hugs:
ERROR "test.hs" (line 4): Type error in application
*** Expression : Any c
*** Term : c
*** Type : Char
*** Does not match : _0
*** Because : cannot instantiate Skolem constant
This, of course, is because the '::' syntax is for static typing. It
can't be used as a dynamic pattern-test.
Question: how big of a change would it be to add this kind of pattern
matching? Is this a small issue, or does it have large and horrible
implications?
--
Ashley Yakeley, Seattle WA
From lennart@mail.augustsson.net Wed Jan 31 06:16:30 2001
Date: Wed, 31 Jan 2001 01:16:30 -0500
From: Lennart Augustsson lennart@mail.augustsson.net
Subject: Type Pattern-Matching for Existential Types
Ashley Yakeley wrote:
> data Any = forall a. Any a
>
> get :: Any -> Maybe Char
> get (Any (c::Char)) = Just c -- bad
> get _ = Nothing
> --
>
> ...but as it stands, this is not legal Haskell, according to Hugs:
>
> ERROR "test.hs" (line 4): Type error in application
> *** Expression : Any c
> *** Term : c
> *** Type : Char
> *** Does not match : _0
> *** Because : cannot instantiate Skolem constant
>
> This, of course, is because the '::' syntax is for static typing. It
> can't be used as a dynamic pattern-test.
>
> Question: how big of a change would it be to add this kind of pattern
> matching? Is this a small issue, or does it have large and horrible
> implications?
It has large and horrible implications. To do dynamic type tests you need
to carry around the types at runtime. This is not something that Haskell
does (at least you don't have to).
-- Lennart
From ashley@semantic.org Wed Jan 31 06:30:17 2001
Date: Tue, 30 Jan 2001 22:30:17 -0800
From: Ashley Yakeley ashley@semantic.org
Subject: Type Pattern-Matching for Existential Types
At 2001-01-30 22:16, Lennart Augustsson wrote:
>It has large and horrible implications. To do dynamic type tests you need
>to carry around the types at runtime. This is not something that Haskell
>does (at least you don't have to).
Hmm. In this:
--
data Any = forall a. Any a
a1 = Any 3
a2 = Any 'p'
--
...are you saying that a1 and a2 do not have to carry types at runtime?
--
Ashley Yakeley, Seattle WA
From lennart@mail.augustsson.net Wed Jan 31 06:34:58 2001
Date: Wed, 31 Jan 2001 01:34:58 -0500
From: Lennart Augustsson lennart@mail.augustsson.net
Subject: Type Pattern-Matching for Existential Types
Ashley Yakeley wrote:
> At 2001-01-30 22:16, Lennart Augustsson wrote:
>
> >It has large and horrible implications. To do dynamic type tests you need
> >to carry around the types at runtime. This is not something that Haskell
> >does (at least you don't have to).
>
> Hmm. In this:
>
> --
> data Any = forall a. Any a
>
> a1 = Any 3
> a2 = Any 'p'
> --
>
> ...are you saying that a1 and a2 do not have to carry types at runtime?
That's right.
Your data type is actually degenerate. There is nothing you can do
whatsoever with a value of type Any (except pass it around).
Slightly more interesting might be
data Foo = forall a . Foo a (a -> Int)
Now you can at least apply the function to the value after pattern matching.
You don't have to carry any types around, because the type system ensures
that you don't misuse the value.
-- Lennart
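[A small sketch of how such a Foo might be used, assuming existential
types are enabled (GHC's ExistentialQuantification extension, or Hugs
outside Haskell 98 mode).]

  {-# LANGUAGE ExistentialQuantification #-}

  data Foo = forall a . Foo a (a -> Int)

  -- the packaged function is the only way to observe the hidden value,
  -- so no run-time type information is needed
  useFoo :: Foo -> Int
  useFoo (Foo x f) = f x

  foos :: [Foo]
  foos = [Foo 'p' fromEnum, Foo [1, 2, 3 :: Int] length]

  main :: IO ()
  main = print (map useFoo foos)   -- prints [112,3]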
From fjh@cs.mu.oz.au Wed Jan 31 07:11:13 2001
Date: Wed, 31 Jan 2001 18:11:13 +1100
From: Fergus Henderson fjh@cs.mu.oz.au
Subject: Type Pattern-Matching for Existential Types
On 31-Jan-2001, Lennart Augustsson <lennart@mail.augustsson.net> wrote:
> Ashley Yakeley wrote:
>
> > data Any = forall a. Any a
> >
> > get :: Any -> Maybe Char
> > get (Any (c::Char)) = Just c -- bad
> > get _ = Nothing
> > --
> >
> > ...but as it stands, this is not legal Haskell, according to Hugs:
> >
> > ERROR "test.hs" (line 4): Type error in application
> > *** Expression : Any c
> > *** Term : c
> > *** Type : Char
> > *** Does not match : _0
> > *** Because : cannot instantiate Skolem constant
> >
> > This, of course, is because the '::' syntax is for static typing. It
> > can't be used as a dynamic pattern-test.
> >
> > Question: how big of a change would it be to add this kind of pattern
> > matching? Is this a small issue, or does it have large and horrible
> > implications?
>
> It has large and horrible implications. To do dynamic type tests you need
> to carry around the types at runtime. This is not something that Haskell
> does (at least you don't have to).
But you can achieve a similar effect to the example above using the
Hugs/ghc `Dynamic' type. Values of type Dynamic do carry around the
type of the encapsulated value.
data Any = forall a. Typeable a => Any a
get :: Any -> Maybe Char
get (Any x) = fromDynamic (toDyn x)
This works as expected:
Main> get (Any 'c')
Just 'c'
Main> get (Any "c")
Nothing
Main> get (Any 42)
ERROR: Unresolved overloading
*** Type : (Typeable a, Num a) => Maybe Char
*** Expression : get (Any 42)
Main> get (Any (42 :: Int))
Nothing
--
Fergus Henderson <fjh@cs.mu.oz.au> | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
From nordland@cse.ogi.edu Wed Jan 31 07:11:19 2001
Date: Tue, 30 Jan 2001 23:11:19 -0800
From: Johan Nordlander nordland@cse.ogi.edu
Subject: Type Pattern-Matching for Existential Types
Lennart Augustsson wrote:
>
> Ashley Yakeley wrote:
>
> > data Any = forall a. Any a
> >
> > get :: Any -> Maybe Char
> > get (Any (c::Char)) = Just c -- bad
> > get _ = Nothing
> > --
> >
> > ...but as it stands, this is not legal Haskell, according to Hugs:
> >
> > ERROR "test.hs" (line 4): Type error in application
> > *** Expression : Any c
> > *** Term : c
> > *** Type : Char
> > *** Does not match : _0
> > *** Because : cannot instantiate Skolem constant
> >
> > This, of course, is because the '::' syntax is for static typing. It
> > can't be used as a dynamic pattern-test.
> >
> > Question: how big of a change would it be to add this kind of pattern
> > matching? Is this a small issue, or does it have large and horrible
> > implications?
>
> It has large and horrible implications. To do dynamic type tests you need
> to carry around the types at runtime. This is not something that Haskell
> does (at least you don't have to).
>
> -- Lennart
It can also be questioned from a software engineering standpoint. Much
of the purpose of existential types is to provide information hiding;
that is, the user of an existentially quantified type is not supposed to
know its concrete representation. The merits of an information-hiding
discipline are probably no news to anybody on this list.
However, this whole idea gets forfeited if it's possible to look behind
the abstraction barrier by pattern-matching on the representation.
Allowing that is a little like saying "this document is secret, but if
you're able to guess its contents, I'll gladly confirm it to you!". The
same argument also applies to information hiding achieved by coercing a
record to a supertype.
This doesn't mean that I can't see the benefit of dynamic type checking
for certain problems. But it should be brought in mind that such a
feature is a separate issue, not to be confused with existential types
or subtyping. And as Lennart says, it's a feature with large (and
horrible!) implications to the implementation of a language.
-- Johan
From ashley@semantic.org Wed Jan 31 07:31:36 2001
Date: Tue, 30 Jan 2001 23:31:36 -0800
From: Ashley Yakeley ashley@semantic.org
Subject: Type Pattern-Matching for Existential Types
At 2001-01-30 23:11, Johan Nordlander wrote:
>However, this whole idea gets forfeited if it's possible to look behind
>the abstraction barrier by pattern-matching on the representation.
Isn't this information-hiding more appropriately achieved by hiding the
constructor?
--
data IntOrChar = MkInt Int | MkChar Char
data Any = forall a. MkAny a
--
Surely simply hiding MkInt, MkChar and MkAny prevents peeking?
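For instance (a minimal sketch of that kind of module-level hiding; the module
and function names are only illustrative):

  {-# LANGUAGE ExistentialQuantification #-}
  module Hidden (IntOrChar, Any, mkInt, mkChar, mkAny) where

  -- the type names are exported, the constructors are not, so clients
  -- can build values but never pattern-match on the representation
  data IntOrChar = MkInt Int | MkChar Char
  data Any       = forall a. MkAny a

  mkInt :: Int -> IntOrChar
  mkInt = MkInt

  mkChar :: Char -> IntOrChar
  mkChar = MkChar

  mkAny :: a -> Any
  mkAny = MkAny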
--
Ashley Yakeley, Seattle WA
From nordland@cse.ogi.edu Wed Jan 31 08:35:55 2001
Date: Wed, 31 Jan 2001 00:35:55 -0800
From: Johan Nordlander nordland@cse.ogi.edu
Subject: Type Pattern-Matching for Existential Types
Ashley Yakeley wrote:
>
> At 2001-01-30 23:11, Johan Nordlander wrote:
>
> >However, this whole idea gets forfeited if it's possible to look behind
> >the abstraction barrier by pattern-matching on the representation.
>
> Isn't this information-hiding more appropriately achieved by hiding the
> constructor?
>
> --
> data IntOrChar = MkInt Int | MkChar Char
> data Any = forall a. MkAny a
> --
>
> Surely simply hiding MkInt, MkChar and MkAny prevents peeking?
This is the simple way of obtaining an abstract datatype that can have
only one static implementation, and as such it can indeed be understood
in terms of existential types. But if you want to define an abstract
datatype that will allow several different implementations to be around
at run-time, you'll need the full support of existential types in the language.
But you might also want to consider the analogy between your example and a
system which carries type information around at runtime. Indeed,
labelled sums are a natural way of achieving a universe of values tagged
with their type. The difference from real dynamic typing is of course
that for labelled sums the set of possible choices is always closed,
which is also what makes their implementation relatively simple (this
still holds in O'Haskell, by the way, despite subtyping).
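A tiny sketch of such a closed, tagged universe (illustrative names only):

  data Univ = UInt Int | UChar Char    -- the set of tags is fixed here

  getChar' :: Univ -> Maybe Char
  getChar' (UChar c) = Just c          -- the "dynamic" test is an ordinary pattern match
  getChar' _         = Nothing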
Nevertheless (continuing the analogy with your proposal for type
pattern-matching), you can think of type matching as a system where
the constructors can't be completely hidden, just dissociated from
any particular "existentially quantified" type. That is, MkInt and
MkChar would still be allowed outside the scope of the abstract type
IntOrChar, they just wouldn't be seen as constructors specifically
associated with that type. Clearly that would severely limit the
usefulness of the type abstraction feature.
-- Johan
From fjh@cs.mu.oz.au Wed Jan 31 08:48:45 2001
Date: Wed, 31 Jan 2001 19:48:45 +1100
From: Fergus Henderson fjh@cs.mu.oz.au
Subject: Type Pattern-Matching for Existential Types
On 30-Jan-2001, Johan Nordlander <nordland@cse.ogi.edu> wrote:
> It can also be questioned from a software engineering standpoint. Much
> of the purpose of existential types is to provide information hiding;
> that is, the user of an existentially quantified type is not supposed to
> know its concrete representation. The merits of an information-hiding
> discipline are probably no news to anybody on this list.
>
> However, this whole idea gets forfeited if it's possible to look behind
> the abstraction barrier by pattern-matching on the representation.
That's a good argument for dynamic type casts not being available by
default. However, there are certainly times when the designer of an
interface wants some degree of abstraction, but does not want to
prohibit dynamic type class casts. The language should permit the
designer to express that intention, e.g. using the `Typeable' type
class constraint.
> Allowing that is a little like saying "this document is secret, but if
> you're able to guess its contents, I'll gladly confirm it to you!".
Yes. But there are times when something like that is indeed what you
want to say. The language should permit you to say it.
The specific language that you have chosen has some connotations
which imply that this would be undesirable. But saying "this data
structure is abstract, but you are permitted to downcast it" is
really not a bad thing to say. There are different levels of secrecy,
and not everything needs to be completely secret; for some things it's
much better to allow downcasting, so long as you are explicit about it.
> This doesn't mean that I can't see the benefit of dynamic type checking
> for certain problems. But it should be brought in mind that such a
> feature is a separate issue, not to be confused with existential types
> or subtyping.
OK, perhaps we are in agreement after all.
--
Fergus Henderson <fjh@cs.mu.oz.au> | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
From C.Reinke@ukc.ac.uk Wed Jan 31 13:19:16 2001
Date: Wed, 31 Jan 2001 13:19:16 +0000
From: C.Reinke C.Reinke@ukc.ac.uk
Subject: Type Pattern-Matching for Existential Types
> > > data Any = forall a. Any a
> > >
> > > get :: Any -> Maybe Char
> > > get (Any (c::Char)) = Just c -- bad
> > > get _ = Nothing
..
> It can also be questioned from a software engineering standpoint. Much
> of the purpose of existential types is to provide information hiding;
> that is, the user of an existentially quantified type is not supposed to
> know its concrete representation. The merits of an information-hiding
> discipline are probably no news to anybody on this list.
This discussion reminds me of an old paper by MacQueen:
David MacQueen. Using dependent types to express modular structure.
In Proc. 13th ACM SIGPLAN-SIGACT Symposium on Principles of
Programming Languages, pages 277--286, January 1986.
He discusses some of the disadvantages of using plain existential
quantification for modular programming purposes and proposes an
alternative based on dependent sums, represented as pairing the witness
type with the expression in which it is used, instead of existentials,
where there is no witness for the existentially quantified type.
See CiteSeer for some of the follow-on work that references MacQueen's
paper:
http://citeseer.nj.nec.com/context/32982/0
Some of that work tried to find a balance between having no witness
types and carrying witness types around at runtime. In this context,
Claudio Russo's work might be of interest, as he proposes to avoid the
problems of existentials without going into (value-)dependent types:
http://www.dcs.ed.ac.uk/home/cvr/
Claus
From Tom.Pledger@peace.com Wed Jan 31 21:31:56 2001
Date: Thu, 1 Feb 2001 10:31:56 +1300
From: Tom Pledger Tom.Pledger@peace.com
Subject: Type Pattern-Matching for Existential Types
Lennart Augustsson writes:
[...]
> Slightly more interesting might be
> data Foo = forall a . Foo a (a -> Int)
>
> Now you can at least apply the function to the value after pattern
> matching. You don't have to carry any types around, because the
> type system ensures that you don't misuse the value.
Hi.
In that particular example, I'd be more inclined to use the
existential properties of lazy evaluation:
packFoo x f = Foo x f
-- The actual type of x is buried in the existential
data Bar = Bar Int
packBar x f = Bar (f x)
-- The actual type of x is buried in the (f x) suspension
Of course, when there are more usable things which can be retrieved
from the type, an explicit existential is more appealing:
data Foo' = forall a . Ord a => Foo' [a]
packFoo' xs = Foo' xs
-- The actual type of xs is buried in the existential
data Bar' = Bar' [[Ordering]]
packBar' xs = Bar' oss where
  oss = [[compare x1 x2 | x2 <- xs] | x1 <- xs]
-- The actual type of xs is buried in the oss suspension, which
-- is rather bloated
Regards,
Tom
From russell@brainlink.com Thu Jan 4 04:11:38 2001
From: russell@brainlink.com (Benjamin L. Russell)
Date: Wed, 03 Jan 2001 23:11:38 -0500
Subject: Learning Haskell and FP
In-Reply-To: <20010103162602.86980FCE@www.haskell.org>
Message-ID:
On Wed, 03 Jan 2001 11:26:53 -0500
Michael Zawrotny wrote:
>
> [snip]
>
> The reason that I found GITH difficult wasn't that the
> concept
> of programming with functions/functional style was new to
> me. What got me was that the concepts and notations were
> much
> more "mathematical" than "programmatic". In my
> explorations
> of various languages, my experience with introductions to
> scheme/CL has mostly been that they tend to show how to
> do
> things that are familiar to imperative programmers, plus
> all
> of the things that you can do with functions as first
> class
> values. With intros to SML, OCaml and Haskell, there is
> a
> much greater emphasis on types, type systems, and
> provable
> program correctness.
>
> [snip]
>
> The thing that I would most like to see would entitled "A
> Practical Guide to Haskell" or something of that nature.
>
> [snip]
>
> One is tempted to come to the conclusion that Haskell is
> not
> suited for "normal" programmers writing "normal"
> programs.
How would you define a "'normal' programmer writing 'normal' programs?" What exactly is a "'normal' program?"
(Perhaps another way of phrasing the issue is as the "declarative" vs. "procedural" distinction, since the issue seems to be that of "what is" (types) vs. "how to" (imperative expression; i.e., procedures).)
While I agree that "A Practical Guide to Haskell" would indeed be a suitable alternative for programmers from the procedural school of expression, I would caution that such an introduction would probably not be suitable for all.
If I may give my own case as an example, I studied both C and Scheme (in addition to auditing a course in Haskell) in college, and favored Scheme over C precisely because of my Scheme course's emphasis on provable program correctness. This is largely a matter of background and taste: my course background was relatively focused on the design and analysis of algorithms, with provable program correctness being a related topic.
Perhaps, ideally, two separate tutorials (or perhaps a single tutorial with two sections based on different viewpoints?) may be needed? The difficulty is that the conceptual distance between the declarative and procedural schools of thought seems too great to be bridged by a single viewpoint. It seems that any introduction favoring either one would risk alienating the other.
Personally, I would really prefer "A Gentle Elementary Introduction to Haskell: Elements of the Haskell School of Expression with Practical Examples," but some would no doubt choose "Haskell in a Nutshell: How to Write Practical Programs in Haskell."
--Ben
--
Benjamin L. Russell
russell@brainlink.com
benjamin.russell.es.94@aya.yale.edu
"Furuike ya! Kawazu tobikomu mizu no oto." --Matsuo Basho
From fruehr@willamette.edu Thu Jan 4 06:55:17 2001
From: fruehr@willamette.edu (Fritz K Ruehr)
Date: Wed, 3 Jan 2001 22:55:17 -0800 (PST)
Subject: Learning Haskell and FP
Message-ID: <200101040655.WAA10861@gemini.willamette.edu>
Benjamin Russell (russell@brainlink.com) wrote:
> Personally, I would really prefer "A Gentle Elementary
> Introduction to Haskell: Elements of the Haskell School of
> Expression with Practical Examples," but some would no doubt
> choose "Haskell in a Nutshell: How to Write Practical Programs
> in Haskell."
An O'Reilly "nutshell" book is an even better suggestion than
my "Design Patterns in Haskell" of a few days back, at least
from the perspective of marketing and promotion.
But it raises the issue of an appropriate animal mascot for
the cover; I can only come up with the Uakari, an exotic-looking
rainforest monkey, which sounds similar to "Curry".
(look here for a picture:)
One possibly relevant point: the site notes that the Uakari is
"mainly arboreal (tree-dwelling)". On the other hand, this means
that it is probably threatened by deforestation, whereas this
phenomenon can be of great help in Haskell :) .
-- Fritz Ruehr
fruehr@willamette.edu
From russell@brainlink.com Fri Jan 5 00:04:10 2001
From: russell@brainlink.com (Benjamin L. Russell)
Date: Thu, 04 Jan 2001 19:04:10 -0500
Subject: Learning Haskell and FP
In-Reply-To: <200101040655.WAA10861@gemini.willamette.edu>
Message-ID:
On Wed, 3 Jan 2001 22:55:17 -0800 (PST)
Fritz K Ruehr wrote:
>
> [snip]
>
> An O'Reilly "nutshell" book is an even better suggestion
> than
> my "Design Patterns in Haskell" of a few days back, at
> least
> from the perspective of marketing and promotion.
>
> But it raises the issue of an appropriate animal mascot
> for
> the cover; I can only come up with the Uakari, an
> exotic-looking
> rainforest monkey, which sounds similar to "Curry".
>
> (look here for a picture:)
>
>
Lalit Pant ( lalitp@acm.org ) (alternatively, lalit_pant@yahoo.com ) wrote an article in the May 2000 issue of _Java Report_ entitled "Developing Intelligent Systems With Java and Prolog" that described a Prolog implementation of the A-star search algorithm. Lalit stated that Prolog was useful for algorithm prototyping.
Perhaps Lalit Pant and Simon Peyton Jones could collaborate on an article, perhaps overseen by Paul Hudak, on prototyping search algorithms in Haskell, also for _The Java Report_? If this article then had a high readership, maybe its success could justify publication of an O'Reilly _Haskell in a Nutshell_ book?
--Ben
P. S. (Hi Doug Fields. I didn't know that you were reading this mailing list. I guess that I should also greet Professor Paul Hudak: Hello, Professor Hudak. Sorry about Collectively Speaking. How's jazz in general?)
Benjamin L. Russell
russell@brainlink.com
benjamin.russell.es.94@aya.yale.edu
"Furuike ya! Kawazu tobikomu mizu no oto." --Matsuo Basho
From patrick@watson.org Fri Jan 5 15:26:19 2001
From: patrick@watson.org (Patrick M Doane)
Date: Fri, 5 Jan 2001 10:26:19 -0500 (EST)
Subject: Learning Haskell and FP
In-Reply-To: <74096918BE6FD94B9068105F877C002D013784BC@red-pt-02.redmond.corp.microsoft.com>
Message-ID:
On Wed, 3 Jan 2001, Simon Peyton-Jones wrote:
> I'm sure that's right. Trouble is, we're the last people qualified
> to write one!
>
> Here's a suggestion: would someone like to write such a guide,
> from the point of view of a beginner, leaving blanks that we can fill in,
> when you come across a task or issue you don't know the answer
> to? That is, you provide the skeleton, and we fill in the blanks.
I read your paper on taming the Awkward Squad several months ago as my
first exposure to Haskell. I think it is an excellent paper and really
convinced me that Haskell was worthwhile to learn and use.
There are aspects to the paper that are like a tutorial, but I think it
would be overwhelming for a programmer not used to reading papers from
academia.
I think a really good beginner's tutorial on I/O could be started from this
paper:
- Start immediately with using the 'do expression' and don't
worry about the history that led to its development.
- Avoid mentioning 'monad' and other mathematical terms until much
later in the game. It is better to see the system in action and then
find out it has a nice solid foundation.
Many people are also annoyed by an author using new vocabulary even
if it is well defined. It's better to get them comfortable with the
system first.
- Take advantage of the 2d syntax rules to avoid the unneeded braces
and semicolons. Syntax with little punctuation seems to go a long way
with many programmers. Pointing out the similarities to Python here
could be appropriate.
- After working through several examples, show that the 'do expression'
is shorthand for some more basic primitive operators. These
can be more appropriate to use in some circumstances (see the
short sketch after this list).
- Conclude with explaining the difference between executing an action
and building a value to execute the action. There is no need to
point out that this is a requirement of being a lazy language.
Instead point out the benefits such a system provides to back up
the claim that Haskell truly is "the world's finest
imperative programming language."
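(A short sketch of the "shorthand" point above; the two definitions are
equivalent, and the names are illustrative:)

  greet :: IO ()
  greet = do
    putStr "Name? "
    name <- getLine
    putStrLn ("Hello, " ++ name)

  -- the same action written with the underlying operators
  greet' :: IO ()
  greet' = putStr "Name? " >> getLine >>= \name -> putStrLn ("Hello, " ++ name)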
Some points would still be difficult to get through:
- Explaining the type system. There is no avoiding this, and users
will have to learn it.
- Working through the difference between 'unit' and 'void'.
Perhaps this can be avoided in a beginning tutorial. A possibly
confusing example in the paper is that of composing putChar twice
while throwing away the result. People used to C or Java might
immediately think "but it doesn't have a result to throw away!"
- Some amount of understanding of Haskell expressions is going to be
needed to understand the examples. An I/O-centric tutorial would want
to explain things as it goes along, as needed.
I would avoid other parts of the paper for a first attempt at some new
tutorial material.
Any thoughts? I could take a first stab at this if people think it would
be a useful direction.
Patrick
From zawrotny@gecko.sb.fsu.edu Fri Jan 5 16:01:13 2001
From: zawrotny@gecko.sb.fsu.edu (Michael Zawrotny)
Date: Fri, 05 Jan 2001 11:01:13 -0500
Subject: Learning Haskell and FP
In-Reply-To: Message from "Benjamin L. Russell"
of "Wed, 03 Jan 2001 23:11:38 EST."
Message-ID: <20010105160017.9CB16FCE@www.haskell.org>
"Benjamin L. Russell" wrote
> Michael Zawrotny wrote:
[snip]
> >
> > The thing that I would most like to see would entitled "A
> > Practical Guide to Haskell" or something of that nature.
> >
> > [snip]
> >
> > One is tempted to come to the conclusion that Haskell is
> > not
> > suited for "normal" programmers writing "normal"
> > programs.
>
> How would you define a "'normal' programmer writing 'normal' programs?" What
> exactly is a "'normal' program?"
That was sloppy on my part. My message was getting long, so I used
"normal" as a short cut. I should know better after seeing all of
the flamewars about whether or not FP is suitable for "real" work.
What I meant by "normal programmer" was (surprise) someone like myself.
I.e. someone who doesn't have much, if any background in formal logic,
higher mathematics, proofs of various and sundry things, etc.
By "normal program" I meant things like short utility programs that
I might otherwise write in shell, python, perl, etc. or data extraction
and analysis programs that I might write in python, perl, C or C++
depending on the type of analysis.
> (Perhaps another way of phrasing the issue is as the "declarative" vs.
> "procedural" distinction, since the issue seems to be that of "what is"
> (types) vs. "how to" (imperative expression; i.e., procedures).)
Although there is some of that, for me at least, the types aren't a
big deal. The main thing for me is figuring out how to actually get
something done. Most of the things I've read have had tons of really
neat things that you can do with functional abstractions, but it's all
... abstract. I would love to see something that is more about getting
things done in a how-to sense, including IO. Much of the material I've
seen tends to either relegate IO and related topics into a small section
near the end (almost as if it were somehow tainted by not being able
to be modelled as a mathematical function), or it goes into it talking
about monads and combinators and things that make no sense to me.
> While I agree that "A Practical Guide to Haskell" would indeed be a suitable
> alternative for programmers from the procedural school of expression, I would
> caution that such an introduction would probably not be suitable for all.
This is very true. I think that there is plenty of material that would
be helpful for an SMLer to learn Haskell, but not much for someone who
was only familiar with C or who was only comfortable with FP
to the extent of understanding lambdas, closures and functions as values,
but none of the more esoteric areas of FP. I'm advocating something
along the lines of the "Practical Guide" I mentioned or the "Nutshell"
book below.
> Perhaps, ideally, two separate tutorials (or perhaps a single tutorial with
> two sections based on different viewpoints?) may be needed? The difficulty
> is that the conceptual distance between the declarative and procedural
> schools of thought seems too great to be bridged by a single viewpoint. It
> seems that any introduction favoring either one would risk alienating the
> other.
I agree that any single tutorial would not be able to target both audiences.
> Personally, I would really prefer "A Gentle Elementary Introduction to
> Haskell: Elements of the Haskell School of Expression with Practical
> Examples," but some would no doubt choose "Haskell in a Nutshell: How to
> Write Practical Programs in Haskell."
I'm definitely the "Nutshell" type. If it were published, I'd buy
it in a heartbeat. That's why it would be good to have both, everyone
would have one that suited their needs.
I'd like to say a couple things in closing, just so that people don't
get the wrong idea. I like Haskell. Even if I never really write any
programs in it, trying to learn it has given me a different way of
thinking about programming as well as exposing me to some new ideas
and generally broadening my horizons. My only problem is that I just
can't seem to get things done with it. Please note that I am not
saying here, nor did I say previously that it isn't possible to do
things in Haskell. Obviously there are a number of people who can.
I am simply saying that I am having trouble doing it. I would also
like to mention that I really appreciate the helpful and informative
tone on the list, especially on a topic that, even though not intended
that way, could be considered critical of Haskell.
Mike
--
Michael Zawrotny
411 Molecular Biophysics Building
Florida State University | email: zawrotny@sb.fsu.edu
Tallahassee, FL 32306-4380 | phone: (850) 644-0069
From erik@meijcrosoft.com Fri Jan 5 17:10:46 2001
From: erik@meijcrosoft.com (Erik Meijer)
Date: Fri, 5 Jan 2001 09:10:46 -0800
Subject: Learning Haskell and FP
References:
Message-ID: <001601c0773a$72acce30$5d0c1cac@redmond.corp.microsoft.com>
> > But it raises the issue of an appropriate animal mascot
> > for
> > the cover; I can only come up with the Uakari, an
> > exotic-looking
> > rainforest monkey, which sounds similar to "Curry".
> >
> > (look here for a picture:)
> >
> >
Wow, that looks remarkably like me!
Erik
From jans@numeric-quest.com Fri Jan 5 12:36:23 2001
From: jans@numeric-quest.com (Jan Skibinski)
Date: Fri, 5 Jan 2001 07:36:23 -0500 (EST)
Subject: Learning Haskell and FP
In-Reply-To: <20010105160017.9CB16FCE@www.haskell.org>
Message-ID:
On Fri, 5 Jan 2001, Michael Zawrotny wrote:
>I would love to see something that is more about getting
> things done in a how-to sense, including IO. Much of the material I've
> seen tends to either relegate IO and related topics into a small section
> near the end (almost as if it were somehow tainted by not being able
> to be modelled as a mathematical function), or it goes into it talking
> about monads and combinators and things that make no sense to me.
Aside from a variety of excellent theoretical papers on "monads and
combinators and things", there are at least three or four down-to-earth
attempts to explain IO in simple terms. Check the Haskell Wiki
page for references.
Personally, I like the "envelope" analogy, which is good enough
for all practical IO manipulations: you open the IO envelope,
manipulate its contents and return the results in another
envelope. Or you just pass the unopened envelope around if you
do not care about reading its contents. But you are not allowed to
pollute the rest of the program by throwing unwrapped
"notes" around; they must always be enclosed in IO "envelopes".
This is an example of a "howto recipe", as you asked for.
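In code, the recipe looks roughly like this (a minimal sketch; the function
name is illustrative):

  import Data.Char (toUpper)

  -- open the IO "envelope", work on the contents, seal them in a new envelope
  shout :: IO String -> IO String
  shout envelope = do
    contents <- envelope              -- open the envelope
    return (map toUpper contents)     -- wrap the new contents up again

  main :: IO ()
  main = shout getLine >>= putStrLn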
There is plenty of library code, and there are short applications
around where the IO monad is used for simple things, like reading
and writing to files, communicating with external processes
(reading a computer clock or something), exchanging information
with web servers (CGI libraries and similar applications), etc.
Some do data acquisition (like reading arrays from files),
some do nothing more than simply write data out, as in my
GD module.
[This is not self-promotion, this is just the simplest possible
useful IO application that comes to my mind now. Check also
"funpdf" - there are plenty of examples of reading, writing and
interacting with a purely functional world.]
Jan
From Doug_Ransom@pml.com Fri Jan 5 18:22:44 2001
From: Doug_Ransom@pml.com (Doug Ransom)
Date: Fri, 5 Jan 2001 10:22:44 -0800
Subject: Learning Haskell and FP
Message-ID: <3233BEE02CB3D4118DBA00A0C99869401D622E@hermes.pml.com>
> -----Original Message-----
> From: Michael Zawrotny [mailto:zawrotny@gecko.sb.fsu.edu]
> Sent: Friday, January 05, 2001 8:01 AM
> To: haskell-cafe@haskell.org
> Subject: Re: Learning Haskell and FP
>
>
> "Benjamin L. Russell" wrote
> > Michael Zawrotny wrote:
> [snip]
> > >
> > > The thing that I would most like to see would entitled "A
> > > Practical Guide to Haskell" or something of that nature.
> > >
> > > [snip]
> > >
> > > One is tempted to come to the conclusion that Haskell is
> > > not
> > > suited for "normal" programmers writing "normal"
> > > programs.
> >
> > How would you define a "'normal' programmer writing
> 'normal' programs?" What
> > exactly is a "'normal' program?"
>
> That was sloppy on my part. My message was getting long, so I used
> "normal" as a short cut. I should know better after seeing all of
> the flamewars about whether or not FP is suitable for "real" work.
>
> What I meant by "normal programmer" was (surprise) someone
> like myself.
> I.e. someone who doesn't have much, if any background in
> formal logic,
> higher mathematics, proofs of various and sundry things, etc.
I agree here. Today's software engineers take the tools at their disposal to
make systems as best they can at the lowest price they can. The FP
documentation is not ready -- it is still in the world of academics. The
tools are also not there.
The problem is not that working programmers cannot understand the necessary
theory to apply the techniques. We certainly do not have the time to go through
all sorts of academic papers. We do have the time to take home a textbook
and read it over a few weekends. What we need is to be spoon-fed the
important knowledge (i.e. in a single textbook). The various Design Pattern
catalogs do this for OO. FP is a little more complicated, but I think there
is much potential to get more work done in the same time if we could learn
to apply it.
>
> By "normal program" I meant things like short utility programs that
> I might otherwise write in shell, python, perl, etc. or data
> extraction
> and analysis programs that I might write in in python, perl, C or C++
> depending on the type of analysis.
For the sake of diversity, a normal program to me includes everything from
XML processing (which I do a fair bit of), database manipulation, delivery of
historical and current data (i.e. "real time" or immediate values) to users, and,
this one is really key: scalable programs for 3-tiered architectures. The
last one is really interesting. For those familiar with COM+ or MTS (or the
Java equivalent), the middle or business tier typically contains objects
which behave as functions -- when a client calls a middle tier object, the
state of that middle tier object is cruft once the call completes. I think
what is called "business logic" in the "real world" of delivering systems
would be an excellent place to use FP.
From wohlstad@cs.ucdavis.edu Sun Jan 7 02:34:54 2001
From: wohlstad@cs.ucdavis.edu (Eric Allen Wohlstadter)
Date: Sat, 6 Jan 2001 18:34:54 -0800 (PST)
Subject: more HDirect confusion
Message-ID:
Things that are popping up when I try to use HDirect:
1. I don't have a "com" directory under "ghc/lib/imports". So, I made one
and copied the files from "hdirect-0.17/lib" into it. I don't know if
that's right.
2. I don't have a "misc" package anywhere. The makefiles complain about
"-syslib misc".
3. I don't have a file called "HScom_imp.a". ld complains about
"-lHScom_imp.a".
I thought I installed everything correctly. I did "make boot", and then
"make", and then "make lib".
Thanks,
Eric
From sebc@posse42.net Mon Jan 8 11:07:17 2001
From: sebc@posse42.net (Sebastien Carlier)
Date: Mon, 8 Jan 2001 12:07:17 +0100
Subject: Extending the do-notation
Message-ID: <20010108110721.A1AB21027@www.haskell.org>
> > I'm constantly amazed by the number of tricks one has
> > to know before he can write concise code using the
> > do-notation (among other things, I used to write
> > "x <- return $ m" instead of "let x = m").
> [snip]
> Why do you WANT to write concise code using the do-notation?
> Has someone revived the Obfuscated Haskell Contest, or
> do you find touch-typing difficult?
Which of the following is easier to read (and please forgive
the short variable names)?
> x <- return $ m
or
> let x = m
> x <- m
> let (a, b) = unzip x
> ... -- (and this code uses an extra variable)
or
> (a, b) <- unzip `liftM` m
Concise does not mean obfuscated. Unnecessarily inflating your code
will not make it more readable. Or am I wrong?
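Spelled out as a complete (if contrived) example of the second pattern,
assuming the standard liftM from Control.Monad and an illustrative producer:

  import Control.Monad (liftM)

  pairsIO :: IO [(Int, Char)]
  pairsIO = return [(1, 'a'), (2, 'b')]

  main :: IO ()
  main = do
    (ns, cs) <- unzip `liftM` pairsIO   -- no intermediate variable needed
    print ns                            -- [1,2]
    print cs                            -- "ab"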
From simonpj@microsoft.com Mon Jan 8 13:54:09 2001
From: simonpj@microsoft.com (Simon Peyton-Jones)
Date: Mon, 8 Jan 2001 05:54:09 -0800
Subject: Extending the do-notation
Message-ID: <74096918BE6FD94B9068105F877C002D355B19@red-pt-02.redmond.corp.microsoft.com>
| Another question concerning the do-notation: I noticed
| that most parts of ghc do not use it. Is it because
| the code was written before the notation was available,
| because the do-notation is too weak to express these
| parts, or for another fundamental reason ?
The former: mostly written before do-notation existed.
We're gradually migrating!
Simon
From russell@brainlink.com Mon Jan 8 18:30:44 2001
From: russell@brainlink.com (Benjamin L. Russell)
Date: Mon, 08 Jan 2001 13:30:44 -0500
Subject: Learning Haskell and FP
In-Reply-To:
Message-ID:
On Fri, 5 Jan 2001 10:26:19 -0500 (EST)
Patrick M Doane wrote:
>
> [snip]
>
> I think a really good beginner's tutorial on I/O could be
> started from this
> paper:
>
> - Start immediately with using the 'do expression' and
> don't
> worry about the history that led to its development.
Actually, the history, especially from a comparative programming languages standpoint, can sometimes be useful for motivation.
For example, many Java textbooks motivated study of the language by explaining the need for a C-style language without explicit memory allocation or explicit pointer casting. Similarly, an on-line document for C# motivated it by explaining the history of how it grew out of a need for a language similar to C and C++ (the document somehow left out the Java comparison :-( ), but that allowed programmers to develop more efficiently in it.
Even for a "Haskell in a Nutshell"-style textbook, a couple of paragraphs comparing Haskell to other languages from a historical viewpoint and describing the advantages and disadvantages of Haskell in particular could prove quite useful.
> [snip]
>
> Many people are also annoyed by an author using new
> vocabulary even
> if it is well defined. It's better to get them
> comfortable with the
> system first.
That depends on which new vocabulary is being mentioned, though. That may be true for unnecessary new vocabulary, such as "monads" for the first chapter. However, e.g. in the following example (borrowed from Chapter 3 of _A Gentle Introduction to Haskell, Version 98,_ by Paul Hudak):
add :: Integer -> Integer -> Integer
add x y = x + y
it is hard not to introduce such vocabulary as "has type," "arrow" (or "mapping"), and maybe even "currying."
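For instance, "currying" is exactly what lets the two-argument add above be
applied to one argument at a time (a minimal illustrative sketch):

  add :: Integer -> Integer -> Integer
  add x y = x + y

  inc :: Integer -> Integer
  inc = add 1          -- partial application: add is "curried"

  five :: Integer
  five = inc 4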
> [snip]
>
> - Conclude with explaining the difference between
> executing an action
> and building a value to execute the action. There is
> no need to
> point out that this is a requirement of being a lazy
> language.
> Instead point out the benefits such a system
> provides to back up
> the claim that Haskell truly is "the world's finest
> imperative programming language."
Forgive me if I am ignorant, but who claimed that Haskell was an "imperative" language?
Also, in order to take full advantage of Haskell, it would seem necessary to get used to functional programming style (the Haskell school of expression, in particular). It seems that using Haskell as an "imperative" language is a bit like thinking in C when programming in C++; only worse, since the imperative habits are being brought into the functional, rather than the OO, realm.
--Ben
--
Benjamin L. Russell
russell@brainlink.com
benjamin.russell.es.94@aya.yale.edu
"Furuike ya! Kawazu tobikomu mizu no oto." --Matsuo Basho
From erik@meijcrosoft.com Mon Jan 8 20:06:35 2001
From: erik@meijcrosoft.com (Erik Meijer)
Date: Mon, 8 Jan 2001 12:06:35 -0800
Subject: Learning Haskell and FP
References:
Message-ID: <001901c079ae$829eced0$5d0c1cac@redmond.corp.microsoft.com>
> Forgive me if I am ignorant, but who claimed that Haskell was an
"imperative" language?
>
> Also, in order to take full advantage of Haskell, it would seem necessary
to
> get used to functional programming style (the Haskell school of
expression, in particular).
> It seems that using Haskell as an "imperative" language is a bit like
thinking in C
> when programming in C++; only worse, since the imperative habits are being
>brought into the functional, rather than the OO, realm.
Nope, I also think that Haskell is the world's finest *imperative* language
(and the world's best functional language as well). The beauty of monads is
that you can encapsulate imperative actions as first-class values, i.e. they
have the same status as functions, lists, ... Not many other imperative
languages have statements as first-class citizens.
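A small sketch of what "statements as first-class citizens" buys you
(illustrative names only):

  -- a list of IO actions is an ordinary value; nothing runs until we say so
  greetings :: [IO ()]
  greetings = [putStrLn "hello", putStrLn "functional", putStrLn "world"]

  main :: IO ()
  main = sequence_ (reverse greetings)   -- rearrange the actions, then run them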
Erik
From theo@engr.mun.ca Mon Jan 8 21:18:14 2001
From: theo@engr.mun.ca (Theodore Norvell)
Date: Mon, 08 Jan 2001 17:48:14 -0330
Subject: Learning Haskell and FP
References: <001901c079ae$829eced0$5d0c1cac@redmond.corp.microsoft.com>
Message-ID: <3A5A2E96.CE5336DC@engr.mun.ca>
Erik Meijer wrote:
> Nope, I also think that Haskell is the world's finest *imperative* language
> (and the world's best functional language as well). The beauty of monads is
> that you can encapsulate imperative actions as first class values, ie they
> have the same status as functions, lists, ... Not many other imperative
> languages have statements as first class citizens.
It may be the only imperative language that doesn't have mutable variables
as a standard part of the language. :-)
I do agree that Haskell has a lot of nice imperative features, but it
is also missing a few that are fundamental to imperative programming.
Personally, I'd love to see a language that is imperative from the
ground up, that has some of the design features of Haskell (especially
the type system), but I don't think that Haskell is that language (yet?).
A question for the list: Is there a book that gives a good introduction
to Hindley-Milner typing theory and practice, and that delves into its
various extensions (e.g. imperative programs, type classes, record types)?
I have Mitchell's book out from the library, but it seems a bit limited
with respect to extensions (I think it deals with subtypes, but not
type classes and mutable variables, for example).
Cheers,
Theodore Norvell
----------------------------
Dr. Theodore Norvell theo@engr.mun.ca
Electrical and Computer Engineering http://www.engr.mun.ca/~theo
Engineering and Applied Science Phone: (709) 737-8962
Memorial University of Newfoundland Fax: (709) 737-4042
St. John's, NF, Canada, A1B 3X5
From Tom.Pledger@peace.com Mon Jan 8 22:04:38 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Tue, 9 Jan 2001 11:04:38 +1300
Subject: Haskell Language Design Questions
In-Reply-To: <3233BEE02CB3D4118DBA00A0C99869401D61F7@hermes.pml.com>
References: <3233BEE02CB3D4118DBA00A0C99869401D61F7@hermes.pml.com>
Message-ID: <14938.14710.72751.416245@waytogo.peace.co.nz>
Doug Ransom writes:
[...]
> 2. It seems to me that the Maybe monad is a poor substitute for
> exception handling because the functions that raise errors may not
> necessarily support it.
It sometimes helps to write such functions for monads in general,
rather than for Maybe in particular. Here's an example adapted from
the standard List module:
findIndex :: (a -> Bool) -> [a] -> Maybe Int
findIndex p xs = case findIndices p xs of
                   (i:_) -> Just i
                   []    -> Nothing
It generalises to this:
findIndex :: Monad m => (a -> Bool) -> [a] -> m Int
findIndex p xs = case findIndices p xs of
                   (i:_) -> return i
                   []    -> fail "findIndex: no match"
The price of the generalisation is that you may need to add some type
signatures to resolve any new overloading at the top level of the
module.
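For instance, the generalised version can then be used at several result
types. (A usage sketch with illustrative names; note that with a modern base
library 'fail' lives in the MonadFail class rather than in Monad, so the
constraint is written that way below.)

  import Data.List (findIndices)

  findIndex' :: MonadFail m => (a -> Bool) -> [a] -> m Int
  findIndex' p xs = case findIndices p xs of
                      (i:_) -> return i
                      []    -> fail "findIndex: no match"

  asMaybe :: Maybe Int
  asMaybe = findIndex' even [1,3,5]   -- Nothing

  asList :: [Int]
  asList = findIndex' even [1,3,5]    -- []

  asIO :: IO Int
  asIO = findIndex' even [2,3,5]      -- returns 0; a failure would raise an IOError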
Regards,
Tom
From nordland@cse.ogi.edu Mon Jan 8 23:42:16 2001
From: nordland@cse.ogi.edu (Johan Nordlander)
Date: Mon, 08 Jan 2001 15:42:16 -0800
Subject: Are anonymous type classes the right model at all? (replying to Re:
Are fundeps the right model at all?)
References:
<3A42661E.7FCCAFFA@tzi.de>
<20001229003745.A11084@mark.ugcs.caltech.edu>
<14938.12684.532876.890748@waytogo.peace.co.nz>
Message-ID: <3A5A5058.8745897D@cse.ogi.edu>
Tom Pledger wrote:
>
> Marcin 'Qrczak' Kowalczyk writes:
> [...]
> > My new record scheme proposal does not provide such lightweight
> > extensibility, but fields can be added and deleted in a controlled
> > way if the right types and instances are made.
>
> Johan Nordlander must be on holiday or something, so I'll deputise for
> him. :-)
No holiday in sight, I'm afraid :-) I just managed to resist the
temptation of throwing in another ad for O'Haskell. But since my name
was brought up...
> O'Haskell also has add-a-field subtyping. Here's the coloured point
> example (from http://www.cs.chalmers.se/~nordland/ohaskell/survey.html):
>
> struct Point =
> x,y :: Float
>
> struct CPoint < Point =
> color :: Color
>
> Regards,
> Tom
Notice though that O'Haskell lacks the ability to delete fields, which I
think is what Marcin also proposes. I've avoided such a feature in
O'Haskell since it would make the principal type of an expression
sensitive to future type declarations. For example, assuming we have
f p = p.x ,
its principal type would be Point -> Float if only the type definitions
above are in scope, but OneDimPoint -> Float in another scope where some
type OneDimPoint is defined to be Point with field y deleted.
-- Johan
From schulzs@uni-freiburg.de Wed Jan 10 20:53:51 2001
From: schulzs@uni-freiburg.de (Sebastian Schulz)
Date: Wed, 10 Jan 2001 20:53:51 +0000
Subject: ANNOUNCE: Draft TOC of Haskell in a Nutshell
References:
Message-ID: <3A5CCBDF.59806FFE@shamoha.de>
"Benjamin L. Russell" wrote:
>
> On Tue, 9 Jan 2001 09:00:27 +0100 (MET)
> Johannes Waldmann wrote:
> >
> > This could be driven to the extreme: not only hide the
> > word "monad",
> > but also "functional". The title would be "Imperative
> > programming in Haskell"
> > (as S. Peyton Jones says in Tackling the Awkward Squad:
> > "Haskell is the world's finest imperative programming
> > language").
>
> Couldn't this choice potentially backfire, though? For example, many people choose Java over C because they prefer OO to straight imperative programming, which they see at The Old Way.
>
> If I went to a bookstore and saw one book entitled, "Imperative Programming in Haskell," and another entitled, "OO Programming in Java," I wouldn't buy the Haskell book, especially if had already had a bad experience with imperative programming in C.
>
> How about, "The Post-OO Age: Haskell: Back to the Future in Imperative Programming"?
I didn't follow this discussion very closely, but:
Hey! What's so evil about the word "functional"??!
Haskell was the first language I learned (to love ;-), and for me it's
more difficult to think imperatively (e.g. when I have to do some homework
in Java).
In that bookstore, I would buy a book "Functional Programming in Java"
:) .
But seriously, I don't think that it is good to hide the fact that Haskell
is a functional language. Nobody will realize how comfortable and
elegant the functional way is while he is still thinking: "Wow, how
complicated it is to program imperatively with this functional syntax".
regards
Sebastian
From mpj@cse.ogi.edu Wed Jan 10 20:08:17 2001
From: mpj@cse.ogi.edu (Mark P Jones)
Date: Wed, 10 Jan 2001 12:08:17 -0800
Subject: Hugs
In-Reply-To: <50.fdb0f92.278e1768@aol.com>
Message-ID:
| I've currently installed Hugs on my PC, could you tell me how
| I can configure Hugs to use an editor. The editor I have got
| installed on my computer is winedt.
This question is answered in the Hugs manual, Section 4 (or
pages 11-13 in the pdf version).
Please note also that questions that are about using Hugs should
be sent to the hugs-users mailing list, and not to the Haskell
list.
All the best,
Mark
From kort@wins.uva.nl Mon Jan 15 19:24:08 2001
From: kort@wins.uva.nl (Jan Kort)
Date: Mon, 15 Jan 2001 20:24:08 +0100
Subject: gui building in haskell
References: <3A5F80BE.75F674B3@cadmos.com>
Message-ID: <3A634E58.D351C6A3@wins.uva.nl>
Matthew Liberty wrote:
>
> Greetings,
>
> I've been looking at http://www.haskell.org/libraries/#guis and trying
> to figure out which package is "best" for building a gui. Can anyone
> give a comparison of the strengths/weaknesses of these packages (or any
> others)?
Hi Matt,
When Manuel's Haskell GTK+ binding (gtkhs) is finished it will
be really cool.
On top of gtkhs there are/will be many other libraries and tools:
- iHaskell: a high level GUI library that avoids the eventloop
mess.
- GtkGLArea: 3D graphics in your GTK+ application using Sven's HOpenGL.
- GUI painter: All you need is a backend for Glade. I'm currently
working on this. Or at least I was half a year ago, other (even
more interesting) things claimed most of my spare time.
Of course this will all take a while to come to a beta release.
FranTk is currently the only high level interface that
is in beta. There is no release for ghc4.08, but it
takes only a day or so to get it working.
In short: use FranTk now or wait a year and use gtkhs.
The reason there are so many GUI libraries on the web page is
that nothing ever gets removed. Most of the libraries are
no longer supported.
I only realized after making the table that it was a bit
redundant, but since I spent 10 mins getting the stuff
lined up properly (please don't tell me you have a
proportional font) you'll have to suffer through it:
+------------+--------+-------+-----+------+------+
| GUI lib | Status | Level | W98 | Unix | OS X |
+------------+--------+-------+-----+------+------+
| TclHaskell | beta | low | yes | yes | |
| Fudgets | alpha*)| high | no | yes | |
| gtkhs | alpha | low | | yes | |
| iHaskell | | high | | | |
| FranTk | beta | high | yes | yes | |
+------------+--------+-------+-----+------+------+
| Haggis | dead | high | no | yes | |
| Haskell-Tk | dead | | | | |
| Pidgets | dead | | | | |
| Budgets | dead | | | | |
| EmbWin | dead | | | | |
| Gadgets | dead | | | | |
+------------+--------+-------+-----+------+------+
*) I thought Fudgets was dead, but apparently it has been revived.
You might want to check that out too, although it has the words
"hackers release" all over it (literaly).
Status: alpha/beta
I guess "dead" sounds a bit rude. It means that I couldn't
find a distribution or it would take too much effort to get
it working.
Level: low: Raw calls to C library.
high: More functional style.
W98: Whether it works on Windows 95/98/ME with ghc4.08.
Unix: Same for Unix (Solaris in my case).
OS X: Same for Macintosh OS X.
Hope this helps,
Jan
From qrczak@knm.org.pl Mon Jan 15 19:49:05 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 15 Jan 2001 19:49:05 GMT
Subject: Are fundeps the right model at all?
References:
Message-ID:
Thanks for the reply!
Mon, 15 Jan 2001 02:01:18 -0800, Mark P Jones writes:
> 1) "A weaker notion of ambiguity" (title of Section 5.8.3 in my dissertation,
> which is where I think the following idea originated): We can modify the
> definition of ambiguity for certain kinds of class constraint by considering
> (for example) only certain subsets of parameters. In this setting, a type
> P => t is ambiguous if and only if there is a variable in AV(P) that is not
> in t. The AV function returns the set of potentially ambiguous variables in
> the predicates P. It is defined so that AV(Eq t) = TV(t), but also can
> accommodate things like AV(Has_field r a) = TV(r). A semantic justification
> for the definition of AV(...) is needed in cases where only some parameters
> are used; this is straightforward in the case of Has_field-like classes.
> Note that, in this setting, the only thing that changes is the definition
> of an ambiguous type.
This is exactly what I have in mind.
Perhaps extended a bit, to allow multiple values of AV(P). During
constraint solving unambiguous choices for the value of AV(P) must
unify to a common answer, but ambiguous choices are not considered
(or something like that).
I don't see a concrete practical example for this extension yet, so
it may be an unneeded complication, but I can imagine two parallel
sets of types with a class expressing a bijection between them:
a type from either side is enough to determine the other. This is
what fundeps can do, basic (1) cannot, but extended (1) can - with a
different treatment of polymorphic types than fundeps, i.e. allowing
some functions which are not necessarily bijections.
> - When a user writes an instance declaration:
>
> instance P => C t1 ... tn where ...
>
> you treat it, in the notation of (2) above, as if they'd written:
>
> instance P => C t1 ... tn
> improves C t11 ... t1n, ..., C tm1 ... tmn where ...
>
> Here, m is the number of keys, and: tij = ti, if parameter i is in key j
> = ai, otherwise
> where a1, ..., an are distinct new variables.
Sorry, I don't understand (2) well enough to see if this is the case.
Perhaps it is.
> - Keys will not give you the full functionality of functional dependencies,
> and that missing functionality is important in some cases.
And vice versa. Plain fundeps can't allow both
Has_parens r a => r
as an unambiguous type and
Has_parens TokenParser (Parser a -> Parser a)
as an instance.
> PS. If we're going to continue this discussion any further, let's
> take it over into the haskell-cafe ...
OK.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From ashley@semantic.org Mon Jan 15 21:02:53 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Mon, 15 Jan 2001 13:02:53 -0800
Subject: gui building in haskell
Message-ID: <200101152102.NAA22728@mail4.halcyon.com>
At 2001-01-15 11:24, Jan Kort wrote:
>When Manuel's Haskell GTK+ binding (gtkhs) is finished it will
>be really cool.
>
>On top of gtkhs there are/will be many other libraries and tools:
>- iHaskell: a high level GUI library that avoids the eventloop
> mess.
>- GtkGLArea: 3D graphics in your GTK+ application using Sven's HOpenGL.
>- GUI painter: All you need is a backend for Glade. I'm currently
> working on this. Or at least I was half a year ago, other (even
> more interesting) things claimed most of my spare time.
Would some kind Haskell-to-Java bridge be a cost-effective way of
providing a multi-platform GUI library, as well as network, SQL, RMI
etc., etc.?
It doesn't necessarily imply compiling to the JVM. Java could simply see
the compiled Haskell code through the JNI.
--
Ashley Yakeley, Seattle WA
From kort@wins.uva.nl Tue Jan 16 10:48:07 2001
From: kort@wins.uva.nl (Jan Kort)
Date: Tue, 16 Jan 2001 11:48:07 +0100
Subject: gui building in haskell
References: <200101152102.NAA22728@mail4.halcyon.com>
Message-ID: <3A6426E7.833AD503@wins.uva.nl>
Ashley Yakeley wrote:
> Would some kind Haskell-to-Java bridge be a cost-effective way of
> providing a multi-platform GUI library, as well as network, SQL, RMI
> etc., etc.?
>
> It doesn't necessarily imply compiling to the JVM. Java could simply see
> the compiled Haskell code through the JNI.
That sounds unlikely to me; how do you override methods through JNI?
The only way I can see this working is the way Mondrian does it:
make a more object oriented Haskell and compile to Java. I don't
think Mondrian is anywhere near an alpha release though.
Jan
From elke.kasimir@catmint.de Tue Jan 16 13:48:41 2001
From: elke.kasimir@catmint.de (Elke Kasimir)
Date: Tue, 16 Jan 2001 14:48:41 +0100 (CET)
Subject: gui building in haskell
In-Reply-To: <3A6426E7.833AD503@wins.uva.nl>
Message-ID:
On 16-Jan-2001 Jan Kort wrote:
(interesting stuff snipped...)
> Ashley Yakeley wrote:
>> Would some kind Haskell-to-Java bridge be a cost-effective way of
>> providing a multi-platform GUI library, as well as network, SQL, RMI
>> etc., etc.?
>>
>> It doesn't necessarily imply compiling to the JVM. Java could simply see
>> the compiled Haskell code through the JNI.
>
> That sounds unlikely to me, how do you overide methods through JNI ?
> The only way I can see this working is the way Mondrian does it:
> make a more object oriented Haskell and compile to Java. I don't
> think Mondrian is anywhere near an alpha release though.
Aside: I often think that the Java GUI, SQL, etc. stuff is also nowhere
near an alpha release ...
Best,
Elke
>
> Jan
>
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
---
"If you have nothing to say, don't do it here..."
Elke Kasimir
Skalitzer Str. 79
10997 Berlin (Germany)
fon: +49 (030) 612 852 16
mail: elke.kasimir@catmint.de
see:
for pgp public key see:
From C.Reinke@ukc.ac.uk Tue Jan 16 16:11:53 2001
From: C.Reinke@ukc.ac.uk (C.Reinke)
Date: Tue, 16 Jan 2001 16:11:53 +0000
Subject: Too Strict?
In-Reply-To: Message from "Steinitz, Dominic J"
of "16 Jan 2001 10:27:32." <"0168C3A642214195*/c=GB/admd=ATTMAIL/prmd=BA/o=British Airways PLC/ou=CORPLN1/s=Steinitz/g=Dominic/i=J/"@MHS>
Message-ID:
Dominic,
> What I can't see at the moment is how to keep what I was doing modular. I had
> a module Anonymize, the implementation of which I wanted to change without
> the user of it having to change their code. The initial implementation was a
> state monad which generated a new string every time it needed one but if it
> was a string it had already anonymized then it looked it up in the state. I
> initially used a list but with 100,000+ strings it took a long time. The next
> implementation used FiniteMap which improved things considerably. I only had
> to make three changes in Anonymize and none in Main. Using MD5 is quicker
> still but isn't so good from the program maintenance point of view.
my first stab at the modularity issue was the version _2 in my last message.
Looking back at the Anonymizable class and instances in your full program,
  type Anon a = IO a

  class Anonymizable a where
    anonymize :: a -> Anon a

  -- MyString avoids overlapping instances of Strings
  -- with the [Char]
  data MyString = MyString String
    deriving Show

  instance Anonymizable MyString where
    anonymize (MyString x)
      = do s <- digest x
           return ((MyString . showHex') s)

  instance Anonymizable a => Anonymizable [a] where
    anonymize xs = mapM anonymize xs
the problem is in the Anonymizable instance for [a]: the mapM in anonymize
constructs an IO script, consisting of some IO operation for each list element,
all chained together into a monolithic whole.
As IO a is an abstract type, this is a bit too restrictive to be modular: if I
ever want any of the anonymized Strings, I can only get a script that
anonymizes them all - before executing that script, I don't have any anonymized
Strings, and after executing the script, all of them have been processed.
This forestalls any attempt to interleave the anonymization with some further
per-element processing. Instead, I would prefer to have a list of IO actions,
not yet chained together (after all, in Haskell, they are just data items), but
that doesn't fit the current return type of anonymize. One approach would be
to change the type of Anon a to [IO a], or to ignore the [a] instance and use
the MyString instance only, but the longer I look at the code, the less I'm
convinced that the overloading is needed at all.
Unless there are other instances of Anonymizable, why not simply have a
function anonymize :: String -> Anon String ? That would still allow you to
hide the implementation decisions you mentioned (even in a separate module),
provided that any extra state you need can be kept in the IO monad.
One would have to write mapM anonymize explicitly where you had simply
anonymize, but it would then become possible to do something else with the list
of IO actions before executing them (in this case, to interleave the printing
with the anonymization).
First, here is the interesting fragment with the un-overloaded anonymize:
  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       a <- mapM anonymize (lines s)
       hPutStr h (unlines a)
It is now possible to import anonymize from elsewhere and do the interleaving
in the code that uses anonymize:
  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       let action l = do
             { a <- anonymize l
             ; hPutStr h a
             }
       mapM_ action (lines s)
Would that work for your problem? Alternatively, if some of your implementation
options require initialization or cleanup, your Anonymize module could offer a
function to process all lines, with a hook for per-line processing:
  processLinesWith perLineAction ls =
    do
    { initialize
    ; as <- mapM action ls
    ; cleanup
    ; return as
    }
    where
      action l = do { a <- anonymize l ; perLineAction a }
Then the code in the client module could simply be:
  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       processLinesWith (hPutStr h) (lines s)
       return ()
Closing the loop, one could now redefine the original, overloaded anonymize to
take a perLineAction, with the obvious instances for MyString and [a], but I
really don't see why every function should have to be called anonymize?-)
Claus
PS The simplified code of the new variant, for observation:
  module Main(main) where

  import Observe
  import IO(openFile,
            hPutStr,
            IOMode(ReadMode,WriteMode,AppendMode))

  filename = "ldif1.txt"
  fileout  = "ldif.out"

  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       let { anonymize s = return (':':s)
           ; action l = do
               { a <- anonymize l
               ; hPutStr h a
               }
           }
       mapM_ (observe "action" action) (lines s)

  main = runO readAndWriteAttrVals
From ashley@semantic.org Tue Jan 16 21:13:30 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Tue, 16 Jan 2001 13:13:30 -0800
Subject: gui building in haskell
Message-ID: <200101162113.NAA12234@mail4.halcyon.com>
At 2001-01-16 02:48, Jan Kort wrote:
>Ashley Yakeley wrote:
>> Would some kind Haskell-to-Java bridge be a cost-effective way of
>> providing a multi-platform GUI library, as well as network, SQL, RMI
>> etc., etc.?
>>
>> It doesn't necessarily imply compiling to the JVM. Java could simply see
>> the compiled Haskell code through the JNI.
>
>That sounds unlikely to me; how do you override methods through JNI?
You create a stub Java class with methods declared 'native'. Anything you
can do in Java, you can do in the JNI.
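For the Haskell side of such a bridge, a minimal sketch with a current GHC
(the module and function names are made up; the JNI glue and the Java
'native' stub are not shown):

   {-# LANGUAGE ForeignFunctionInterface #-}
   module Bridge where

   import Foreign.C.Types (CInt(..))

   -- Exported with C linkage, so the JNI glue (and hence a Java
   -- method declared 'native') can call into the compiled Haskell.
   plusOne :: CInt -> CInt
   plusOne n = n + 1

   foreign export ccall plusOne :: CInt -> CInt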
--
Ashley Yakeley, Seattle WA
From Tom.Pledger@peace.com Tue Jan 16 22:04:40 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Wed, 17 Jan 2001 11:04:40 +1300
Subject: O'Haskell OOP Polymorphic Functions
In-Reply-To: <200101162127.NAA14109@mail4.halcyon.com>
References: <200101162127.NAA14109@mail4.halcyon.com>
Message-ID: <14948.50552.476451.139539@waytogo.peace.co.nz>
Ashley Yakeley writes:
> At 2001-01-16 13:18, Magnus Carlsson wrote:
>
> >f1 = Just 3
> >f2 = f3 = f4 = Nothing
>
> So I've declared b = d, but 'theValue b' and 'theValue d' are different
> because theValue is looking at the static type of its argument?
>
> What's to stop 'instance TheValue Base' applying in 'theValue d'?
The subtyping (struct Derived < Base ...) makes the two instances
overlap, with 'instance TheValue Derived' being strictly more specific
than 'instance TheValue Base'. If the system preferred the less
specific one, the more specific one would never be used.
This is quite similar to the way overlapping instances are handled
when they occur via substitutions for type variables (e.g. 'instance C
[Char]' is strictly more specific than 'instance C [a]') in
implementations which support that language extension.
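A small sketch of that kind of overlap in plain Haskell (illustrative names;
it needs the overlapping/flexible-instance extensions, e.g. Hugs with -98 or
the corresponding GHC options):

   class Describe a where
     describe :: a -> String

   instance Describe [a] where          -- the general instance
     describe _ = "some list"

   instance Describe [Char] where       -- strictly more specific
     describe s = "the string " ++ show s

   -- describe "hi" resolves to the more specific [Char] instance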
Regards,
Tom
From ashley@semantic.org Tue Jan 16 22:20:50 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Tue, 16 Jan 2001 14:20:50 -0800
Subject: O'Haskell OOP Polymorphic Functions
Message-ID: <200101162220.OAA21394@mail4.halcyon.com>
At 2001-01-16 14:04, Tom Pledger wrote:
>The subtyping (struct Derived < Base ...) makes the two instances
>overlap, with 'instance TheValue Derived' being strictly more specific
>than 'instance TheValue Base'. If the system preferred the less
>specific one, the more specific one would never be used.
>
>This is quite similar to the way overlapping instances are handled
>when they occur via substitutions for type variables (e.g. 'instance C
>[Char]' is strictly more specific than 'instance C [a]') in
>implementations which support than language extension.
Subtyping-overlapping is quite different from type-substitution
overlapping.
Consider:
  struct B

  struct D1 < Base =
    a1 :: Int

  struct D2 < Base =
    a2 :: Int

  class TheValue a where theValue :: a -> Int
  instance TheValue B where theValue _ = 0
  instance TheValue D1 where theValue _ = 1
  instance TheValue D2 where theValue _ = 2

  struct M < D1,D2

  m = struct
    a1 = 0
    a2 = 0

  f = theValue m
What's the value of f?
--
Ashley Yakeley, Seattle WA
From Tom.Pledger@peace.com Tue Jan 16 22:36:06 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Wed, 17 Jan 2001 11:36:06 +1300
Subject: O'Haskell OOP Polymorphic Functions
In-Reply-To: <200101162220.OAA21394@mail4.halcyon.com>
References: <200101162220.OAA21394@mail4.halcyon.com>
Message-ID: <14948.52438.917832.297297@waytogo.peace.co.nz>
Ashley Yakeley writes:
[...]
> Subtyping-overlapping is quite different from type-substitution
> overlapping.
Different, but with some similarities.
> Consider:
>
> struct B
>
> struct D1 < Base =
> a1 :: Int
>
> struct D2 < Base =
> a2 :: Int
>
> class TheValue a where theValue :: a -> Int
> instance TheValue B where theValue _ = 0
> instance TheValue D1 where theValue _ = 1
> instance TheValue D2 where theValue _ = 2
>
> struct M < D1,D2
>
> m = struct
> a1 = 0
> a2 = 0
>
> f = theValue m
>
> What's the value of f?
Undefined, because neither of the overlapping instances is strictly
more specific than the other. I hope that would cause a static error,
anywhere that 'instance TheValue D1', 'instance TheValue D2', and
'struct M' are all in scope together.
Here's a similar example using type-substitution overlapping:
instance TheValue Char where ...
instance Monad m => TheValue (m Char) where ...
instance TheValue a => TheValue (Maybe a) where ...
trouble = theValue (Just 'b')
Regards,
Tom
From ashley@semantic.org Tue Jan 16 22:52:35 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Tue, 16 Jan 2001 14:52:35 -0800
Subject: O'Haskell OOP Polymorphic Functions
Message-ID: <200101162252.OAA25782@mail4.halcyon.com>
At 2001-01-16 14:36, Tom Pledger wrote:
>Here's a similar example using type-substitution overlapping:
>
> instance TheValue Char where ...
> instance Monad m => TheValue (m Char) where ...
> instance TheValue a => TheValue (Maybe a) where ...
>
> trouble = theValue (Just 'b')
Apparently this is not good Haskell syntax. I tried compiling this in
Hugs:
  class TheValue a where theValue :: a -> Int
  instance TheValue Char where theValue _ = 0
  instance (Monad m) => TheValue (m Char) where theValue _ = 1   -- error here
  instance (TheValue a) => TheValue (Maybe a) where theValue _ = 2

  trouble = theValue (Just 'b')
I got a syntax error:
(line 3): syntax error in instance head (variable expected)
--
Ashley Yakeley, Seattle WA
From chak@cse.unsw.edu.au Tue Jan 16 23:33:30 2001
From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty)
Date: Tue, 16 Jan 2001 23:33:30 GMT
Subject: gui building in haskell
In-Reply-To: <3A634E58.D351C6A3@wins.uva.nl>
References: <3A5F80BE.75F674B3@cadmos.com>
<3A634E58.D351C6A3@wins.uva.nl>
Message-ID: <20010116233330K.chak@cse.unsw.edu.au>
Jan Kort wrote,
> +------------+--------+-------+-----+------+------+
> | GUI lib | Status | Level | W98 | Unix | OS X |
> +------------+--------+-------+-----+------+------+
[..]
> | gtkhs | alpha | low | | yes | |
[..]
Given the current cross-platform efforts of GTK+, I think
this is overly pessimistic. GTK+ currently also runs on
BeOS, there is a Win98 version, and in addition to the X
interface on Unix there is now direct support for the Linux
framebuffer device.
Gtk+HS has only been tested on Unix, but I think it doesn't
contain anything Unix-specific - except that it makes use
of the usual GNU build tools like autoconf and gmake.
Cheers,
Manuel
From fruehr@willamette.edu Wed Jan 17 00:38:13 2001
From: fruehr@willamette.edu (Fritz Ruehr)
Date: Tue, 16 Jan 2001 16:38:13 -0800
Subject: Learning Haskell and FP
In-Reply-To: <001901c079ae$829eced0$5d0c1cac@redmond.corp.microsoft.com>
Message-ID:
Erik Meijer said:
> Not many other imperative languages have statements as first class citizens.
I don't have the details here (e.g., an Algol 68 report), but Michael Scott
reports in his "Programming Language Pragmatics" text (p. 279) that:
"Algol 68 [allows], in essence, indexing into an array of statements,
but the syntax is rather cumbersome."
This is in reference to historical variations on switch-like statements (and
consistent with a running theme, typical in PL texts, about the extremes of
orthogonality found in Algol 68).
-- Fritz Ruehr
fruehr@willamette.edu
From jf15@hermes.cam.ac.uk Wed Jan 17 12:23:46 2001
From: jf15@hermes.cam.ac.uk (Jon Fairbairn)
Date: Wed, 17 Jan 2001 12:23:46 +0000 (GMT)
Subject: Learning Haskell and FP
In-Reply-To:
Message-ID:
On Tue, 16 Jan 2001, Fritz Ruehr wrote:
> Erik Meijer said:
>
> > Not many other imperative languages have statements as first class citizens.
>
> I don't have the details here (e.g., an Algol 68 report), but Michael Scott
> reports in his "Programming Language Pragmatics" text (p. 279) that:
>
> "Algol 68 [allows], in essence, indexing into an array of statements,
> but the syntax is rather cumbersome."
Well, there are two ways it allows this.
1) The case statement is little more than array indexing

      case <integer expression>
      in <unit>,
         <unit>,
         ...
      out <unit>
      esac

2) You can create an array of procedures returning void
results, for which if I remember correctly you have to write

      VOID: <statement>

to turn the <statement> into a proc void. You can certainly
index an array of these and the relevant proc will be called
as soon as the indexing happens (you don't need to write
() or anything).
So (VOID: print ("a"), VOID: print ("b"))[2] would print
"b". I can't remember if you need to specify the type of the
array, though.
The statements aren't first class, though, because their
scope is restricted by the scope of variables that they
reference. So
   begin [10] proc void x; # declare an array of procs #
      begin int n := 42;
         x[1] := void: (print (n))
      end;
      x[1]
   end
is invalid because at x[1] it would call the procedure,
which would refer to n, which is out of scope (and quite
possibly because of sundry syntax errors!). So Algol 68
isn't a counterexample to Erik's claim.
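For contrast, a tiny Haskell sketch of the same indexing, where the
"statements" (IO actions) really are first-class values:

   actions :: [IO ()]
   actions = [putStr "a", putStr "b"]

   main :: IO ()
   main = actions !! 1    -- prints "b" (0-indexed), cf. the [2] above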
> This is in reference to historical variations on switch-like statements (and
> consistent with a running theme, typical in PL texts, about the extremes of
> orthogonality found in Algol 68).
Although if they'd really wanted to be extreme they could
have left out integer case clauses, because they are the
same as indexing a [] proc void!
Jón
--
Jón Fairbairn                                 Jon.Fairbairn@cl.cam.ac.uk
31 Chalmers Road                              jf@cl.cam.ac.uk
Cambridge CB1 3SZ                             +44 1223 570179 (pm only, please)
From uk1o@rz.uni-karlsruhe.de Wed Jan 17 23:35:28 2001
From: uk1o@rz.uni-karlsruhe.de (Hannah Schroeter)
Date: Thu, 18 Jan 2001 00:35:28 +0100
Subject: Will Haskell be commercialized in the future?
In-Reply-To: ; from mg169780@students.mimuw.edu.pl on Mon, Nov 27, 2000 at 08:53:05PM +0100
References: <3233BEE02CB3D4118DBA00A0C99869401D60BB@hermes.pml.com>
Message-ID: <20010118003527.A11618@rz.uni-karlsruhe.de>
Hello!
[a somewhat older mail]
On Mon, Nov 27, 2000 at 08:53:05PM +0100, Michal Gajda wrote:
> [...]
> I often use Haskell in imperative style (for example writing a toy
> [...]
I also do. Perhaps not super-often, but more than once.
- A program to interface to BSD's /dev/tun* and/or /dev/bpf* to
simulate network links with losses/delays (configurable).
~150 lines of C + ~ 1700 lines of Haskell with some GHC extensions.
One efficiency optimization was that I used network buffers
constructed out of MutableByteArray#s together with some
administrative information (an offset into the MBA to designate
the real packet start - necessary to leave room for in-place header
prefixing / removal, the real packet length, the buffer length ...).
Written single threaded with manual select calls (no conc Haskell).
- HTTP testers (two of them with slightly different tasks).
- file generators/translators (diverse, e.g. a generator for test
scripts for one of the above-named HTTP tester)
Of course, the latter often are something like

   foo <- readFile bar
   let baz = process_somehow foo
   writeFile blurb baz
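Spelled out as a complete (if trivial) program of that shape, with made-up
file names and processing step:

   module Main where

   import Data.Char (toUpper)

   main :: IO ()
   main = do
     foo <- readFile "bar.txt"
     let baz = map toUpper foo
     writeFile "blurb.txt" baz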
Kind regards,
Hannah.
From uk1o@rz.uni-karlsruhe.de Thu Jan 18 00:00:18 2001
From: uk1o@rz.uni-karlsruhe.de (Hannah Schroeter)
Date: Thu, 18 Jan 2001 01:00:18 +0100
Subject: GHCi, Hugs (was: Mixture ...)
In-Reply-To: <74096918BE6FD94B9068105F877C002D01378329@red-pt-02.redmond.corp.microsoft.com>; from simonpj@microsoft.com on Mon, Dec 18, 2000 at 02:07:43AM -0800
References: <74096918BE6FD94B9068105F877C002D01378329@red-pt-02.redmond.corp.microsoft.com>
Message-ID: <20010118010018.C11618@rz.uni-karlsruhe.de>
Hello!
On Mon, Dec 18, 2000 at 02:07:43AM -0800, Simon Peyton-Jones wrote:
> | It'd be good if left-behind tools were using a BSD(-like) licence, in
> | the event that anybody - commercial or otherwise - who wanted to pick
> | them up again, were free to do so.
> GHC does indeed have a BSD-like license.
Except e.g. for the gmp included in the source and the runtime system.
So more precisely: The non-third-party part of GHC has a BSD-like license,
together with *some* third-party parts, maybe.
Kind regards,
Hannah.
From nordland@cse.ogi.edu Thu Jan 18 00:07:14 2001
From: nordland@cse.ogi.edu (Johan Nordlander)
Date: Wed, 17 Jan 2001 16:07:14 -0800
Subject: O'Haskell OOP Polymorphic Functions
References: <200101172348.PAA28747@mail4.halcyon.com>
Message-ID: <3A6633B2.696CA44E@cse.ogi.edu>
Ashley Yakeley wrote:
>
> OK, I've figured it out. In this O'Haskell statement,
>
> > struct Derived < Base =
> > value :: Int
>
> ...Derived is not, in fact, a subtype of Base. Derived and Base are
> disjoint types, but an implicit map of type "Derived -> Base" has been
> defined.
>
> --
> Ashley Yakeley, Seattle WA
Well, they actually are subtypes, in the sense that no implicit mapping needs to be
defined. But since Derived and Base also are two distinct type constructors,
the overloading system treats them as completely unrelated types (which is fine,
in general). To this the upcoming O'Hugs release adds the capability of using
an instance defined for a (unique, smallest) supertype of the inferred type, in
case an instance for the inferred type is missing. This all makes a system that
is very similar to the overlapping instances extension that Tom mentioned.
-- Johan
From uk1o@rz.uni-karlsruhe.de Thu Jan 18 00:10:54 2001
From: uk1o@rz.uni-karlsruhe.de (Hannah Schroeter)
Date: Thu, 18 Jan 2001 01:10:54 +0100
Subject: Boolean Primes Map (continued)
In-Reply-To: ; from shlomif@vipe.technion.ac.il on Fri, Dec 22, 2000 at 05:58:56AM +0200
References:
Message-ID: <20010118011054.D11618@rz.uni-karlsruhe.de>
Hello!
On Fri, Dec 22, 2000 at 05:58:56AM +0200, Shlomi Fish wrote:
> primes :: Int -> [Int]
> primes how_much = (iterate 2 initial_map) where
>     initial_map :: [Bool]
>     initial_map = (map (\x -> True) [ 0 .. how_much])
>     iterate :: Int -> [Bool] -> [Int]
>     iterate p (a:as) | p > mybound = process_map p (a:as)
>                      | a = p:(iterate (p+1) (mymark (p+1) step (2*p) as))
>                      | (not a) = (iterate (p+1) as) where
>         step :: Int
>         step = if p == 2 then p else 2*p
>         mymark :: Int -> Int -> Int -> [Bool] -> [Bool]
>         mymark cur_pos step next_pos [] = []
>         mymark cur_pos step next_pos (a:as) =
>             if (cur_pos == next_pos) then
>                 False:(mymark (cur_pos+1) step (cur_pos+step) as)
>             else
>                 a:(mymark (cur_pos+1) step next_pos as)
>     mybound :: Int
>     mybound = ceiling(sqrt(fromIntegral(how_much)))
>     process_map :: Int -> [Bool] -> [Int]
>     process_map cur_pos [] = []
>     process_map cur_pos (a:as) | a = cur_pos:(process_map (cur_pos+1) as)
>                                | (not a) = (process_map (cur_pos+1) as)
This is buggy.
hannah@mamba:~/src/haskell $ ./primes3 100
[2,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101]
51 primes found.
Correct result is:
hannah@mamba:~/src/haskell $ ./primes0 100
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97]
25 primes found.
And it's much slower than your previous, correct variant as well as
my just-mailed variant.
Kind regards,
Hannah.
From ashley@semantic.org Thu Jan 18 00:37:01 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Wed, 17 Jan 2001 16:37:01 -0800
Subject: O'Haskell OOP Polymorphic Functions
Message-ID: <200101180037.QAA04970@mail4.halcyon.com>
At 2001-01-17 16:07, Johan Nordlander wrote:
>Ashley Yakeley wrote:
>>
>> OK, I've figured it out. In this O'Haskell statement,
>>
>> > struct Derived < Base =
>> > value :: Int
>>
>> ...Derived is not, in fact, a subtype of Base. Derived and Base are
>> disjoint types, but an implicit map of type "Derived -> Base" has been
>> defined.
>>
>> --
>> Ashley Yakeley, Seattle WA
>
>Well, they are actually subtypes, as far as no implicit mapping needs to be
>defined. But since Derived and Base also are two distinct type constructors,
>the overloading system treats them as completely unrelated types (which is
>fine, in general).
All of O'Haskell treats them as completely unrelated types. In fact, this
O'Haskell...
> struct Base =
> b1 :: Int
> b2 :: Char
>
> struct Derived < Base =
> d1 :: Int
...is a kind of syntactic sugar for this Haskell...
> data Base = Base (Int,Char)
> dotb1 (Base (x,_)) = x
> dotb2 (Base (_,x)) = x
>
> data Derived = Derived (Int,Char,Int)
> dotd1 (Derived (_,_,x)) = x
>
> implicitMap (Derived (b1,b2,d1)) = Base (b1,b2)
This seems to be stretching the concept of 'subtype'.
Sorry if I sound so bitter and disappointed. I was hoping for a Haskell
extended with real dynamic subtyping...
--
Ashley Yakeley, Seattle WA
From lennart@mail.augustsson.net Thu Jan 18 01:03:14 2001
From: lennart@mail.augustsson.net (Lennart Augustsson)
Date: Wed, 17 Jan 2001 20:03:14 -0500
Subject: O'Haskell OOP Polymorphic Functions
References: <200101180037.QAA04970@mail4.halcyon.com>
Message-ID: <3A6640D2.41D12D7C@mail.augustsson.net>
Ashley Yakeley wrote:
> This seems to be stretching the concept of 'subtype'.
I don't think so, this is the essence of subtyping.
> Sorry if I sound so bitter and disappointed. I was hoping for a Haskell
> extended with real dynamic subtyping...
You seem to want dynamic type tests. This is another feature, and
sometimes a useful one. But it requires carrying around types at
runtime.
You might want to look at existential types; it is a similar feature.
-- Lennart
From wli@holomorphy.com Thu Jan 18 23:57:46 2001
From: wli@holomorphy.com (William Lee Irwin III)
Date: Thu, 18 Jan 2001 15:57:46 -0800
Subject: A simple problem
In-Reply-To: <200101182338.PAA16529@mail4.halcyon.com>; from ashley@semantic.org on Thu, Jan 18, 2001 at 03:38:10PM -0800
References: <200101182338.PAA16529@mail4.halcyon.com>
Message-ID: <20010118155746.J10018@holomorphy.com>
Moving over to haskell-cafe...
At 2001-01-18 05:16, Saswat Anand wrote:
>> fun 3 --gives error in Hugs
>> fun (3::Integer) -- OK
>>
>> I am a building an embedded language, so don't want user to cast. Is
>> there a solution?
On Thu, Jan 18, 2001 at 03:38:10PM -0800, Ashley Yakeley wrote:
> 3 is not always an Integer. It's of type "(Num a) => a".
> I couldn't find a way to say that every Num is a C.
I didn't try to say every Num is C, but I found a way to make his
example work:
class C a where
fun :: a -> Integer
instance Integral a => C a where
fun = toInteger . succ
One has no trouble whatsoever with evaluating fun 3 with this instance
defined instead of the original. I'm not sure as to the details, as I'm
fuzzy on the typing derivations making heavy use of qualified types. Is
this the monomorphism restriction biting us again?
Cheers,
Bill
--
how does one decide if something is undecidable?
carefully
--
From ashley@semantic.org Fri Jan 19 00:04:01 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Thu, 18 Jan 2001 16:04:01 -0800
Subject: A simple problem
Message-ID: <200101190004.QAA19966@mail4.halcyon.com>
At 2001-01-18 15:57, William Lee Irwin III wrote:
>class C a where
> fun :: a -> Integer
>
>instance Integral a => C a where
> fun = toInteger . succ
Gives "syntax error in instance head (constructor expected)" at the
'instance' line in Hugs. Is there an option I need to turn on or something?
--
Ashley Yakeley, Seattle WA
From wli@holomorphy.com Fri Jan 19 00:07:39 2001
From: wli@holomorphy.com (William Lee Irwin III)
Date: Thu, 18 Jan 2001 16:07:39 -0800
Subject: A simple problem
In-Reply-To: <200101190004.QAA19966@mail4.halcyon.com>; from ashley@semantic.org on Thu, Jan 18, 2001 at 04:04:01PM -0800
References: <200101190004.QAA19966@mail4.halcyon.com>
Message-ID: <20010118160739.K10018@holomorphy.com>
At 2001-01-18 15:57, William Lee Irwin III wrote:
>>class C a where
>> fun :: a -> Integer
>>
>>instance Integral a => C a where
>> fun = toInteger . succ
>
On Thu, Jan 18, 2001 at 04:04:01PM -0800, Ashley Yakeley wrote:
> Gives "syntax error in instance head (constructor expected)" at the
> 'instance' line in Hugs. Is there an option I need to turn on or something?
Yes, when hugs is invoked, invoke it with the -98 option to turn off the
strict Haskell 98 compliance.
Cheers,
Bill
From Tom.Pledger@peace.com Fri Jan 19 00:34:04 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Fri, 19 Jan 2001 13:34:04 +1300
Subject: A simple problem
In-Reply-To: <200101190008.QAA20452@mail4.halcyon.com>
References: <200101190008.QAA20452@mail4.halcyon.com>
Message-ID: <14951.35708.370567.748350@waytogo.peace.co.nz>
Ashley Yakeley writes:
> At 2001-01-18 15:38, I wrote:
>
> >3 is not always an Integer. It's of type "(Num a) => a".
>
> Of course, it would be nice if 3 were an Integer, and Integer were a
> subtype of Real. I haven't come across a language that does this, where
> for instance 3.0 can be cast to Integer (because it is one) but 3.1
> cannot be.
A cast in that direction - becoming more specific - would be nicely
typed as:
Real -> Maybe Integer
or with the help of the fail and return methods:
Monad m => Real -> m Integer
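A minimal sketch of such a cast, using Double in place of an abstract Real
type (the function name is made up):

   realToInteger :: Double -> Maybe Integer
   realToInteger x
     | fromInteger n == x = Just n          -- 3.0 casts to Just 3
     | otherwise          = Nothing         -- 3.1 casts to Nothing
     where n = truncate x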
From simonmar@microsoft.com Wed Jan 17 10:39:53 2001
From: simonmar@microsoft.com (Simon Marlow)
Date: Wed, 17 Jan 2001 02:39:53 -0800
Subject: {-# LINE 100 "Foo.hs #-} vs. # 100 "Foo.hs"
Message-ID: <9584A4A864BD8548932F2F88EB30D1C61157AF@TVP-MSG-01.europe.corp.microsoft.com>
> Indeed. Or do you want to tell me that you are going to
> break one of my favourite programs?
>
> [ code deleted ]
> # 111 "Foo.hs"
Actually the cpp-style pragma is only recognised if the '#' is in the
leftmost column and is followed by optional spaces and a digit. It's
quite hard to write one of these in a legal Haskell program, but not
impossible.
Simon
From qrczak@knm.org.pl Fri Jan 19 07:54:34 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 19 Jan 2001 07:54:34 GMT
Subject: {-# LINE 100 "Foo.hs #-} vs. # 100 "Foo.hs"
References: <9584A4A864BD8548932F2F88EB30D1C61157AF@TVP-MSG-01.europe.corp.microsoft.com>
Message-ID:
Wed, 17 Jan 2001 02:39:53 -0800, Simon Marlow writes:
> Actually the cpp-style pragma is only recognised if the '#' is in the
> leftmost column and is followed by optional spaces and a digit. It's
> quite hard to write one of these in a legal Haskell program, but not
> impossible.
It's enough to change Manuel's program to use {;} instead of layout.
But it won't happen in any real life program.
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From chak@cse.unsw.edu.au Fri Jan 19 10:45:55 2001
From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty)
Date: Fri, 19 Jan 2001 10:45:55 GMT
Subject: {-# LINE 100 "Foo.hs #-} vs. # 100 "Foo.hs"
In-Reply-To:
References: <9584A4A864BD8548932F2F88EB30D1C61157AF@TVP-MSG-01.europe.corp.microsoft.com>
Message-ID: <20010119104555I.chak@cse.unsw.edu.au>
qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) wrote,
> Wed, 17 Jan 2001 02:39:53 -0800, Simon Marlow writes:
>
> > Actually the cpp-style pragma is only recognised if the '#' is in the
> > leftmost column and is followed by optional spaces and a digit. It's
> > quite hard to write one of these in a legal Haskell program, but not
> > impossible.
>
> It's enough to change Manuel's program to use {;} instead of layout.
> But it won't happen in any real life program.
A dangerous statement. For example, automatically generated
code often contains quite strange code. If something breaks
the standard, it breaks the standard.
Cheers,
Manuel
From tweed@compsci.bristol.ac.uk Tue Jan 23 18:23:18 2001
From: tweed@compsci.bristol.ac.uk (D. Tweed)
Date: Tue, 23 Jan 2001 18:23:18 +0000 (GMT)
Subject: Specifications of 'any', 'all', 'findIndices'
In-Reply-To: <3A6DC955.1DCC97A7@yale.edu>
Message-ID:
On Tue, 23 Jan 2001, Mark Tullsen wrote:
> Johannes Waldmann wrote:
> > ...
> > I'd rather write clear code, than worry about efficiency too early.
> > Who said this, "premature optimization is the root of all evil".
>
> I've always attributed this to Donald Knuth:
>
> Premature optimization is the root of all evil in programming.
In his paper on the errors of TeX (no proper ref but it's reprinted in his
book on Literate Programming) he calls it Hoare's dictum (i.e. Tony
Hoare) although the context suggests that this isn't an `official
name'. Dunno if Hoare heard it from someone else though...
___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm | tweed's law: however many computers
email: tweed@cs.bris.ac.uk      | you have, half your time is spent
work tel: (0117) 954-5250       | waiting for compilations to finish.
From diatchki@cse.ogi.edu Tue Jan 23 20:16:05 2001
From: diatchki@cse.ogi.edu (Iavor Diatchki)
Date: Tue, 23 Jan 2001 12:16:05 -0800
Subject: 'any' and 'all' compared with the rest of the Report
In-Reply-To: <20010123123519.A11539@gandalf.smsu.edu>; from Eric Shade on Tue, Jan 23, 2001 at 12:35:19PM -0600
References: <20010123123519.A11539@gandalf.smsu.edu>
Message-ID: <20010123121605.02765@church.cse.ogi.edu>
hello
i myself am not an experienced Haskell user, so please correct me
if i am wrong. it is difficult in general to reason about the
performance of lazy programs, so i don't think one can assume
much. in particular i don't think 'any' and 'all' will
perform in linear space. here is why i think so:
take an example when 'any' is applied to some list (x:xs)
and someone is actually interested in the result:
any p (x:xs)
1. -> (or . map p) (x:xs)
2. -> or (map p (x:xs))
3. -> foldr (||) False (map p (x:xs))
[at this stage we need to know what kind of list (map p (x:xs)) is,
i.e. empty or a cons thing, so we need to do some evaluation]
4. -> foldr (||) False (p x : map p xs)
5. -> p x || (foldr (||) False (map p xs))
[at this stage we need to know what kind of thing is p x, i.e.
True or False, so we need to evaluate p x]
6. -> if p x is True we are done (result True)
7. -> if p x is False the result is (foldr (||) False (map p xs))
and we go back to 3. note that p x has become garbage and so it
doesnt really take up any space, so one really needs only enough
space to process the list one element at a time.
what causes problems is the need to create unnecessary cons cells
(i.e. the evaluation after 3.). this is bad because it takes time.
of course it only adds a constant so the complexity is the same
but in practise programs run slower. this is where i would expect
a good compiler to do some optimisation, i.e to remove the need
for the intermediate list.
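for reference, the directly recursive versions, which avoid the intermediate
list and run in constant space (a sketch; the import just keeps them from
clashing with the Prelude):

   import Prelude hiding (any, all)

   any, all :: (a -> Bool) -> [a] -> Bool
   any p []     = False
   any p (x:xs) = p x || any p xs

   all p []     = True
   all p (x:xs) = p x && all p xs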
i like the idea of programming at a higher level, as i believe it
produces "better structured" programs. what i mean is that one
manages to capture certain aspects of a program, which would
be obscured if one always used explicit recursion. i think
higher order functions like maps and folds really go a long way
toward structuring functional programs. in a way this is similar
to using loops instead of explicit gotos in procedural programs.
anyways these are my deep thought on the matter :)
bye
iavor
On Tue, Jan 23, 2001 at 12:35:19PM -0600, Eric Shade wrote:
...
>
> Then I get to 'any' and 'all', whose specification requires linear
> time and *linear* space when it should run in constant space. (By the
> way, I checked and GHC does *not* use the Prelude definitions but
> rather the obvious recursive ones, and most of its optimizations based
> on "good consumers/producers" use meta-linguistic rewrite rules. So
> without knowing the specific optimizations that a compiler provides, I
> think it's safe to assume that the Prelude 'any' and 'all' *will*
> require linear space.)
...
--
+---------------------------------+---------------------------------------+
|Iavor S. Diatchki | email: diatchki@cse.ogi.edu |
|Dept. of Computer Science | web: http://www.cse.ogi.edu/~diatchki |
|Oregon Graduate Institute | tel: 5037481631 |
+---------------------------------+---------------------------------------+
From Sven.Panne@informatik.uni-muenchen.de Tue Jan 23 22:10:53 2001
From: Sven.Panne@informatik.uni-muenchen.de (Sven Panne)
Date: Tue, 23 Jan 2001 23:10:53 +0100
Subject: 'any' and 'all' compared with the rest of the Report
References: <20010123123519.A11539@gandalf.smsu.edu> <20010123121605.02765@church.cse.ogi.edu>
Message-ID: <3A6E016D.9BB3F6E9@informatik.uni-muenchen.de>
Iavor Diatchki wrote:
> [...] but in practise programs run slower.
If "practise" = "simple interpreter", yes. But...
> this is where i would expect a good compiler to do some optimisation,
> i.e to remove the need for the intermediate list.
Given

   or    = foldr (||) False
   any p = or . map p

ghc -O generates basically the following code (use -ddump-simpl to
see this):

   any :: (a -> Bool) -> [a] -> Bool
   any p xs = let go ys = case ys of
                            (z:zs) -> case p z of
                                        False -> go zs
                                        True  -> True
              in go xs
This is exactly the recursive version, without any intermediate list,
but I hardly think that anybody recognizes this with a quick glance
only.
> i like the idea of programming at a higher level, as i believe it
> produces "better structured" programs. what i mean is that one
> manages to capture certain aspects of a program, which would
> be obscured if one always used explicit recursion. i think
> higher order functions like maps and folds really go a long way
> toward structuring functional programs. in a way this is similar
> to using loops instead of explicit gotos in procedural programs.
> anyways these are my deep thought on the matter :)
IMHO you should write any kind of recursion over a given data structure
at most once (in a higher order function). (Constructor) classes and
generics are a further step in this direction. It vastly improves
readability (after you get used to it :-) and often there is no
performance hit at all.
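A small sketch of that style for a user-defined type (the Tree type and the
names below are made up for illustration):

   data Tree a = Leaf | Node (Tree a) a (Tree a)

   -- the one place where the recursion over Tree is written out
   foldTree :: b -> (b -> a -> b -> b) -> Tree a -> b
   foldTree leaf node = go
     where
       go Leaf         = leaf
       go (Node l x r) = node (go l) x (go r)

   -- everything else reuses foldTree instead of explicit recursion
   sumTree :: Num a => Tree a -> a
   sumTree = foldTree 0 (\l x r -> l + x + r)

   anyTree :: (a -> Bool) -> Tree a -> Bool
   anyTree p = foldTree False (\l x r -> l || p x || r)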
Cheers,
Sven
From uk1o@rz.uni-karlsruhe.de Tue Jan 23 22:25:16 2001
From: uk1o@rz.uni-karlsruhe.de (Hannah Schroeter)
Date: Tue, 23 Jan 2001 23:25:16 +0100
Subject: 'any' and 'all' compared with the rest of the Report
In-Reply-To: <3A6E016D.9BB3F6E9@informatik.uni-muenchen.de>; from Sven.Panne@informatik.uni-muenchen.de on Tue, Jan 23, 2001 at 11:10:53PM +0100
References: <20010123123519.A11539@gandalf.smsu.edu> <20010123121605.02765@church.cse.ogi.edu> <3A6E016D.9BB3F6E9@informatik.uni-muenchen.de>
Message-ID: <20010123232516.D21129@rz.uni-karlsruhe.de>
Hello!
On Tue, Jan 23, 2001 at 11:10:53PM +0100, Sven Panne wrote:
> [...]
>
> Given
>
> or = foldr (||) False
> any p = or . map p
> ghc -O generates basically the following code (use -ddump-simpl to
> see this):
> any :: (a -> Bool) -> [a] -> Bool
> any p xs = let go ys = case ys of
> (z:zs) -> case p z of
> False -> go zs
> True -> True
> in go xs
Mental note: I should really upgrade GHC. I'm however a bit afraid about
ghc-current, as I'm on a non-ELF arch.
> [...]
Kind regards,
Hannah.
From Sven.Panne@informatik.uni-muenchen.de Tue Jan 23 22:45:12 2001
From: Sven.Panne@informatik.uni-muenchen.de (Sven Panne)
Date: Tue, 23 Jan 2001 23:45:12 +0100
Subject: 'any' and 'all' compared with the rest of the Report
References: <20010123123519.A11539@gandalf.smsu.edu> <20010123121605.02765@church.cse.ogi.edu> <3A6E016D.9BB3F6E9@informatik.uni-muenchen.de>
Message-ID: <3A6E0978.1A2831D6@informatik.uni-muenchen.de>
I wrote:
> [...]
> ghc -O generates basically the following code (use -ddump-simpl to
> see this):
>
> any :: (a -> Bool) -> [a] -> Bool
> any p xs = let go ys = case ys of
Ooops, cut'n'paste error: Insert
[] -> False
here. :-}
> (z:zs) -> case p z of
> False -> go zs
> True -> True
> in go xs
> [...]
Cheers,
Sven
From jmaessen@mit.edu Wed Jan 24 19:25:12 2001
From: jmaessen@mit.edu (Jan-Willem Maessen)
Date: Wed, 24 Jan 2001 14:25:12 -0500
Subject: 'any' and 'all' compared with the rest of the Report
Message-ID: <200101241925.OAA00754@lauzeta.mit.edu>
(Note, I've kept this on Haskell-cafe)
Bjorn Lisper wrote (several messages back):
> ... What I in turn would like to add is that specifications like
>
> any p = or . map p
>
> are on a higher level of abstraction
...
> The first specification is, for instance, directly data parallel
> which facilitates an implementation on a parallel machine or in hardware.
As was pointed out in later discussion, the specification of "or" uses
foldr and is thus not amenable to data parallelism.
On which subject Jerzy Karczmarczuk later noted:
> On the other hand, having an even more generic logarithmic iterator
> for associative operations seems to me a decent idea.
In my master's thesis, "Eliminating Intermediate Lists in pH using
Local Transformations", I proposed doing just that, and called the
resulting operator "reduce" in line with Bird's list calculus. I then
gave rules for performing deforestation so that sublists could be
traversed in parallel. The compiler can choose from many possible
definitions of reduce, including:
reduce = foldr
reduce f z = foldl (flip f) z
reduce = divideAndConquer
(most of the complexity of the analysis focuses on making this choice
intelligently; the rest could be re-stated RULES-style just as the GHC
folks have done with foldr/build.) We've been using this stuff for
around 6-7 years now.
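For concreteness, one way a divide-and-conquer reduce might look for plain
lists (a sketch only, assuming f is associative with unit z; the two halves
are independent and could be evaluated in parallel):

   divideAndConquer :: (a -> a -> a) -> a -> [a] -> a
   divideAndConquer f z xs = go (length xs) xs
     where
       go _ []  = z
       go _ [x] = x
       go n ys  = f (go half left) (go (n - half) right)
         where
           half          = n `div` 2
           (left, right) = splitAt half ys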
Fundamentally, many associative operators have a "handedness" which we
disregard at our peril. For example, defining "or" in terms of foldl
as above is very nearly correct until handed an infinite list, but it
performs really badly regardless of the input. Thus, we need a richer
set of primitives (and a lot more complexity) if we really want to
capture "what is most commonly a good idea" as opposed to "what is
safe". For example, our deforestation pass handles "concat" as part
of its internal representation, since no explicit definition using
reduction is satisfactory:
concat = foldr (++) [] -- least work, no parallelism
concat = reduce (++) [] -- parallelizes well, but could pessimize
-- the program
For this reason, it's hard to use "reduce" directly in more than a
handful of prelude functions. We use reduce a lot, but only after
uglification of the prelude definitions to convince the compiler to
"guess right" about the appropriate handedness.
There is also the problem of error behavior; if we unfold an "or"
eagerly to exploit its data parallelism we may uncover errors, even if
it's finite:
> or [True, error "This should never be evaluated"]
True
My current work includes [among other things] ways to eliminate this
problem---that is, we may do a computation eagerly and defer or
discard any errors.
-Jan-Willem Maessen
My Master's Thesis is at:
ftp://csg-ftp.lcs.mit.edu/pub/papers/theses/maessen-sm.ps.gz
A shorter version (with some refinements) is found in:
ftp://csg-ftp.lcs.mit.edu/pub/papers/csgmemo/memo-370.ps.gz
From lisper@it.kth.se Thu Jan 25 09:40:49 2001
From: lisper@it.kth.se (Bjorn Lisper)
Date: Thu, 25 Jan 2001 10:40:49 +0100 (MET)
Subject: 'any' and 'all' compared with the rest of the Report
In-Reply-To: <200101241925.OAA00754@lauzeta.mit.edu> (message from Jan-Willem
Maessen on Wed, 24 Jan 2001 14:25:12 -0500)
References: <200101241925.OAA00754@lauzeta.mit.edu>
Message-ID: <200101250940.KAA10165@dentex.it.kth.se>
>There is also the problem of error behavior; if we unfold an "or"
>eagerly to exploit its data parallelism we may uncover errors, even if
>it's finite:
>> or [True, error "This should never be evaluated"]
>True
>My current work includes [among other things] ways to eliminate this
>problem---that is, we may do a computation eagerly and defer or
>discard any errors.
What you basically have to do is to treat purely data-dependent errors (like
division by zero, or indexing an array out of bounds) as values rather than
events. Then you can decide whether to raise the error or discard it,
depending on whether the error value turned out to be needed or not.
You will have to extend the operators of the language to deal also with
error values. Basically, the error values should have algebraic properties
similar to bottom (so strict functions return error given an error as
argument). Beware that some decisions have to be taken regarding how error
values should interact with bottom. (For instance, should we have
error + bottom = error or error + bottom = bottom?) The choice affects which
evaluation strategies will be possible.
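A tiny model of this in Haskell itself, with error values as ordinary data
(the names are illustrative):

   data ErrVal a = Ok a | Err String
     deriving Show

   -- a "strict" addition that propagates error values
   addE :: Num a => ErrVal a -> ErrVal a -> ErrVal a
   addE (Ok x)  (Ok y)  = Ok (x + y)
   addE (Err e) _       = Err e
   addE _       (Err e) = Err e

   -- With this clause order, addE (Err e) undefined = Err e while
   -- addE undefined (Err e) diverges: precisely the error-vs-bottom
   -- choice described above.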
Björn Lisper
From jmaessen@mit.edu Thu Jan 25 16:09:07 2001
From: jmaessen@mit.edu (Jan-Willem Maessen)
Date: Thu, 25 Jan 2001 11:09:07 -0500
Subject: 'any' and 'all' compared with the rest of the Report
Message-ID: <200101251609.LAA00584@lauzeta.mit.edu>
Bjorn Lisper replies to my reply:
> >My current work includes [among other things] ways to eliminate this
> >problem---that is, we may do a computation eagerly and defer or
> >discard any errors.
>
> What you basically have to do is to treat purely data-dependent errors (like
> division by zero, or indexing an array out of bounds) as values rather than
> events.
Indeed. We can have a class of deferred exception values similar to
IEEE NaNs.
[later]:
> Beware that some decisions have to be taken regarding how error
> values should interact with bottom. (For instance, should we have
> error + bottom = error or error + bottom = bottom?) The choice affects which
> evaluation strategies will be possible.
Actually, as far as I can tell we have absolute freedom in this
respect. What happens when you run the following little program?
\begin{code}
forever x = forever x
bottomInt :: Int
bottomInt = error "Evaluating bottom is naughty" + forever ()
main = print bottomInt
\end{code}
I don't know of anything in the Haskell language spec that forces us
to choose whether to signal the error or diverge in this case (though
it's clear we must do one or the other). Putting strong constraints
on evaluation order would cripple a lot of the worker/wrapper-style
optimizations that (eg) GHC users depend on for fast code. We want
the freedom to demand strict arguments as early as possible; the
consequence is we treat all bottoms equally, even if they exhibit
different behavior in practice. This simplification is a price of
"clean equational semantics", and one I'm more than willing to pay.
-Jan-Willem Maessen
jmaessen@mit.edu
[PS - I've deliberately dodged the issue of program-level exception
handling mechanisms and the like.]
From marko@ki.informatik.uni-frankfurt.de Fri Jan 26 12:59:02 2001
From: marko@ki.informatik.uni-frankfurt.de (Marko Schuetz)
Date: Fri, 26 Jan 2001 13:59:02 +0100
Subject: 'any' and 'all' compared with the rest of the Report
In-Reply-To: Your message of "Thu, 25 Jan 2001 11:09:07 -0500"
<200101251609.LAA00584@lauzeta.mit.edu>
References: <200101251609.LAA00584@lauzeta.mit.edu>
Message-ID: <20010126135902I.marko@kinetic.ki.informatik.uni-frankfurt.de>
From: Jan-Willem Maessen
Subject: Re: 'any' and 'all' compared with the rest of the Report
Date: Thu, 25 Jan 2001 11:09:07 -0500
> Bjorn Lisper replies to my reply:
> > >My current work includes [among other things] ways to eliminate this
> > >problem---that is, we may do a computation eagerly and defer or
> > >discard any errors.
> >
> > What you basically have to do is to treat purely data-dependent errors (like
> > division by zero, or indexing an array out of bounds) as values rather than
> > events.
>
> Indeed. We can have a class of deferred exception values similar to
> IEEE NaNs.
>
> [later]:
> > Beware that some decisions have to be taken regarding how error
> > values should interact with bottom. (For instance, should we have
> > error + bottom = error or error + bottom = bottom?) The choice affects which
> > evaluation strategies will be possible.
>
> Actually, as far as I can tell we have absolute freedom in this
> respect. What happens when you run the following little program?
I don't think we have absolute freedom. Assuming we want
\forall s : bottom \le s
including s = error, then we should also have error \not\le
bottom. For all other values s \not\equiv bottom we would want error
\le s.
      . . . . . . . . .
       \             /
        \           /
              .
              .
              .
              |
            error
              |
            bottom
Now if f is a strict function returning a strict function then
(f bottom) error \equiv bottom error \equiv bottom
and due to f's strictness either f error \equiv bottom or f error
\equiv error. The former is as above. For the latter (assuming
monotonicity) we have error \le 1 \implies f error \le f 1 and thus
(f error) bottom \le (f 1) bottom \equiv bottom
On the other hand, if error and other data values are incomparable.
      . . . . .     error
       \           /
        \         /
             .
             .
             |
          bottom
and you want, say, error + bottom \equiv error then + can no longer be
strict in its second argument....
So I'd say error + bottom \equiv bottom and bottom + error \equiv
bottom.
>
> \begin{code}
> forever x = forever x
>
> bottomInt :: Int
> bottomInt = error "Evaluating bottom is naughty" + forever ()
>
> main = print bottomInt
> \end{code}
>
> I don't know of anything in the Haskell language spec that forces us
> to choose whether to signal the error or diverge in this case (though
> it's clear we must do one or the other). Putting strong constraints
> on evaluation order would cripple a lot of the worker/wrapper-style
> optimizations that (eg) GHC users depend on for fast code. We want
> the freedom to demand strict arguments as early as possible; the
> consequence is we treat all bottoms equally, even if they exhibit
> different behavior in practice. This simplification is a price of
> "clean equational semantics", and one I'm more than willing to pay.
If error \equiv bottom and you extend, say, Int with NaNs, how do you
implement arithmetic such that Infinity + Infinity \equiv Infinity and
Infinity/Infinity \equiv Invalid Operation?
Marko
From jmaessen@mit.edu Fri Jan 26 17:26:17 2001
From: jmaessen@mit.edu (Jan-Willem Maessen)
Date: Fri, 26 Jan 2001 12:26:17 -0500
Subject: 'any' and 'all' compared with the rest of the Report
Message-ID: <200101261726.MAA00618@lauzeta.mit.edu>
Marko Schuetz starts an explanation of bottom
vs. error with an assumption which I think is dangerous:
> Assuming we want
>
> \forall s : bottom \le s
>
> including s = error, then we should also have error \not\le
> bottom.
He uses this, and an argument based on currying, to show that strict
functions ought to force their arguments left to right.
This seems to me to be another instance of an age-old debate in
programming language design/semantics: If we can in practice observe
evaluation order, should we therefore specify that evaluation order?
This is a debate that's raged for quite a while among users of
imperative languages. Some languages (Scheme comes to mind) very
clearly state that this behavior is left unspecified.
Fortunately, for the Haskell programmer the debate is considerably
simpler, as only the behavior of "wrong" programs is affected. I am
more than willing to be a little ambiguous about the error behavior of
programs, by considering "bottom" and "error" to be one and the same.
As I noted in my last message, this allows some optimizations which
would otherwise not be allowed. Here's the worker-wrapper
optimization at work; I'll use explicit unboxing to make the
evaluation clear.
  forever x = forever x            -- throughout.

  addToForever :: Int -> Int
  addToForever b = forever () + b

  main = print (addToForever (error "Bottom is naughty!"))

==
  -- expanding the definition of +

  addToForever b =
    case forever () of
      I# a# ->
        case b of
          I# b# -> a# +# b#

==
  -- At this point strictness analysis reveals that addToForever
  -- is strict in its argument. As a result, we perform the worker-
  -- wrapper transformation:

  addToForever b =
    case b of
      I# b# -> addToForever_worker b#

  addToForever_worker b# =
    let b = I# b#
    in case forever () of
         I# a# ->
           case b of
             I# b# -> a# +# b#
==
-- The semantics have changed---b will now be evaluated before
-- forever().
I've experimented with versions of Haskell where order of evaluation
did matter. It was a giant albatross around our neck---there is
simply no way to cleanly optimize programs in such a setting without
doing things like termination analysis to justify the nice old
transformations. If you rely on precise results from an analysis to
enable transformation, you all-too-frequently miss obvious
opportunities. For this reason I *very explicitly* chose not to make
such distinctions in my work.
In general, making too many semantic distinctions weakens the power of
algebraic semantics. Two levels---bottom and everything else---seems
about the limit of acceptable complexity. If you look at the work on
free theorems you quickly discover that even having bottom in the
language makes life a good deal more difficult, and really we'd like
to have completely flat domains.
I'd go as far as saying that it also gives us some prayer of
explaining our algebraic semantics to the programmer. A complex
algebra becomes too bulky to reason about when things act strangely.
Bottom is making things hard enough.
-Jan-Willem Maessen
PS - Again, we don't try to recover from errors. This is where the
comparison with IEEE arithmetic breaks down: NaNs are specifically
designed so you can _test_ for them and take action. I'll also point
out that infinities are _not_ exceptional values; they're semantically
"at the same level" as regular floats---so the following comparison is
a bit disingenuous:
> If error \equiv bottom and you extend, say, Int with NaNs, how do you
> implement arithmetic such that Infinity + Infinity \equiv Infinity and
> Infinity/Infinity \equiv Invalid Operation?
From marko@ki.informatik.uni-frankfurt.de Fri Jan 26 19:00:51 2001
From: marko@ki.informatik.uni-frankfurt.de (Marko Schuetz)
Date: Fri, 26 Jan 2001 20:00:51 +0100
Subject: 'any' and 'all' compared with the rest of the Report
In-Reply-To: Your message of "Fri, 26 Jan 2001 12:26:17 -0500"
<200101261726.MAA00618@lauzeta.mit.edu>
References: <200101261726.MAA00618@lauzeta.mit.edu>
Message-ID: <20010126200051O.marko@kinetic.ki.informatik.uni-frankfurt.de>
From: Jan-Willem Maessen
Subject: Re: 'any' and 'all' compared with the rest of the Report
Date: Fri, 26 Jan 2001 12:26:17 -0500
> Marko Schuetz starts an explanation of bottom
> vs. error with an assumption which I think is dangerous:
> > Assuming we want
> >
> > \forall s : bottom \le s
> >
> > including s = error, then we should also have error \not\le
> > bottom.
>
> He uses this, and an argument based on currying, to show that strict
> functions ought to force their arguments left to right.
I can't see where I did. I argued that distinguishing between error
and bottom seems to not leave much choice for bottom + error.
[..]
> forever x = forever x -- throughout.
>
> addToForever :: Int -> Int
> addToForever b = forever () + b
>
> main = print (addToForever (error "Bottom is naughty!"))
>
> ==
> -- expanding the definition of +
>
> addToForever b =
> case forever () of
> I# a# ->
> case b of
> I# b# -> a# +# b#
>
> ==
> -- At this point strictness analysis reveals that addToForever
> -- is strict in its argument. As a result, we perform the worker-
> -- wrapper transformation:
>
> addToForever b =
> case b of
> I# b# -> addToForever_worker b#
>
> addToForever_worker b# =
> let b = I# b#
> in case forever () of
> I# a# ->
> case b of
> I# b# -> a# +# b#
>
> ==
> -- The semantics have changed---b will now be evaluated before
> -- forever().
Contextually, the original and the worker/wrapper versions do not
differ. I.e. there is no program into which the two could be inserted
which would detect a difference. So their semantics should be regarded
as equal.
> PS - Again, we don't try to recover from errors. This is where the
> comparison with IEEE arithmetic breaks down: NaNs are specifically
> designed so you can _test_ for them and take action. I'll also point
> out that infinities are _not_ exceptional values; they're semantically
> "at the same level" as regular floats---so the following comparison is
> a bit disingenuous:
> > If error \equiv bottom and you extend, say, Int with NaNs, how do you
> > implement arithmetic such that Infinity + Infinity \equiv Infinity and
> > Infinity/Infinity \equiv Invalid Operation?
Infinity was chosen as an example: from what you described I had the
impression the implementation needed to match on NaN constructors at
some point. Is this not the case?
Marko
From jmaessen@mit.edu Fri Jan 26 20:53:51 2001
From: jmaessen@mit.edu (Jan-Willem Maessen)
Date: Fri, 26 Jan 2001 15:53:51 -0500
Subject: A clarification...
Message-ID: <200101262053.PAA00883@lauzeta.mit.edu>
Marko Schuetz replies to me as follows:
> > He uses this, and an argument based on currying, to show that strict
> > functions ought to force their arguments left to right.
>
> I can't see where I did. I argued that distinguishing between error
> and bottom seems to not leave much choice for bottom + error.
You're right, sorry. I misread the following from your earlier
message:
> So I'd say error + bottom \equiv bottom and bottom + error \equiv
> bottom.
As you noted:
> and you want, say, error + bottom \equiv error then + can no longer be
> strict in its second argument....
I'd managed to turn this around in my head.
Nonetheless, my fundamental argument stands: If we separate "bottom"
and "error", we have a few choices operationally, and I'm not fond of
them:
1) Make "error" a representable value. This appears closest to what
you were describing above:
case ERROR of x -> expr => expr [ERROR/x]
This is tricky for unboxed types (especially Ints; floats and
pointers aren't so hard; note that we need more than one
distinguishable error value in practice if we want to tell the user
something useful about what went wrong).
At this point, by the way, it's not a big leap to flatten the
domain as is done with IEEE NaNs, so that monotonicity wrt errors
is a language-level phenomenon rather than a semantic one and we
can handle exceptions by testing for error values.
2) Weaken the algebraic theory as I discussed in my last message.
3) Reject an operational reading of "case" as forcing evaluation and
continuing and have it "do something special" when it encounters
error:
case ERROR of x -> expr => ERROR glb expr[?/x]
From the programmer's perspective, I'd argue that it's *better* to
signal signal-able errors whenever possible, rather than deferring
them. If nothing else, a signaled error is easier to diagnose than
nontermination! Thus, I'd LIKE:
error + bottom === error
But I'm willing to acknowledge that I can't get this behavior
consistently, except with some sort of fair parallel execution.
I'm doing something along the lines of (3), but I abandon execution
immediately on seeing error---which is consistent only if
error==bottom:
case ERROR of x -> expr => ERROR
[Indeed, the compiler performs this reduction statically where
possible, as it gets rid of a lot of dead code.]
I defer the signaling of errors only if the expression in question is
being evaluated eagerly; for this there is no "case" construct
involved.
-Jan-Willem Maessen
From qrczak@knm.org.pl Fri Jan 26 20:19:47 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 26 Jan 2001 20:19:47 GMT
Subject: A clarification...
References: <200101262053.PAA00883@lauzeta.mit.edu>
Message-ID:
Fri, 26 Jan 2001 15:53:51 -0500, Jan-Willem Maessen writes:
> 3) Reject an operational reading of "case" as forcing evaluation and
> continuing and have it "do something special" when it encounters
> error:
> case ERROR of x -> expr => ERROR glb expr[?/x]
The subject of errors vs. bottoms is discussed in
http://research.microsoft.com/~simonpj/papers/imprecise-exceptions.ps.gz
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
From jmaessen@mit.edu Fri Jan 26 22:34:12 2001
From: jmaessen@mit.edu (Jan-Willem Maessen)
Date: Fri, 26 Jan 2001 17:34:12 -0500
Subject: A clarification...
Message-ID: <200101262234.RAA00954@lauzeta.mit.edu>
qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) further clarifies my
clarification:
> > case ERROR of x -> expr => ERROR glb expr[?/x]
>
> The subject of errors vs. bottoms is discussed in
> http://research.microsoft.com/~simonpj/papers/imprecise-exceptions.ps.gz
Indeed. I crawled through my stack of electronic papers, and couldn't
find the reference. As I noted in my earlier messages, I was
deliberately dodging the issue of *catching* exceptions, as it really
requires a thorough treatment. :-) The "?" in the above reduction is
their special 0 term. They equate divergence with exceptional
convergence within the functional calculus for the reasons I outlined
[I reconstructed those reasons from memory and my own experience, so
differences are due to my faulty recollection]. Their notion of
refinement justifies the reduction I mentioned in my mail:
> > case ERROR of x -> expr => ERROR
In all, an excellent paper for those who are interested in this topic.
-Jan-Willem Maessen
From fjh@cs.mu.oz.au Sat Jan 27 14:24:58 2001
From: fjh@cs.mu.oz.au (Fergus Henderson)
Date: Sun, 28 Jan 2001 01:24:58 +1100
Subject: 'any' and 'all' compared with the rest of the Report
In-Reply-To: <20010126135902I.marko@kinetic.ki.informatik.uni-frankfurt.de>
References: <200101251609.LAA00584@lauzeta.mit.edu> <20010126135902I.marko@kinetic.ki.informatik.uni-frankfurt.de>
Message-ID: <20010128012458.A8108@hg.cs.mu.oz.au>
On 26-Jan-2001, Marko Schuetz wrote:
> I don't think we have absolute freedom. Assuming we want
>
> \forall s : bottom \le s
>
> including s = error, then we should also have error \not\le
> bottom.
You lost me here. Why should we have error \not\le bottom?
Why not just error \not\lt bottom?
--
Fergus Henderson <fjh@cs.mu.oz.au> | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: | -- the last words of T. S. Garp.
From wohlstad@cs.ucdavis.edu Sun Jan 28 10:03:24 2001
From: wohlstad@cs.ucdavis.edu (Eric Allen Wohlstadter)
Date: Sun, 28 Jan 2001 02:03:24 -0800 (PST)
Subject: HScript
Message-ID:
I have tried e-mailing the authors of HScript with my problems but they
seem unresponsive. I was hoping maybe someone out there could help me.
When I tried to run the demos I got an error "Could not find Prelude.hs",
so I changed the hugsPath variable in regedit to point to hugs/lib. Then I
got an error "Could not find Int". So I added hugs/lib/exts. Now I get the
error "Addr.hs (line 23): Unknown primitive reference to addrToInt".
Eric Wohlstadter
From marko@ki.informatik.uni-frankfurt.de Sun Jan 28 12:46:46 2001
From: marko@ki.informatik.uni-frankfurt.de (Marko Schuetz)
Date: Sun, 28 Jan 2001 13:46:46 +0100
Subject: 'any' and 'all' compared with the rest of the Report
In-Reply-To: Your message of "Sun, 28 Jan 2001 01:24:58 +1100"
<20010128012458.A8108@hg.cs.mu.oz.au>
References: <20010128012458.A8108@hg.cs.mu.oz.au>
Message-ID: <20010128134646X.marko@kinetic.ki.informatik.uni-frankfurt.de>
From: Fergus Henderson
Subject: Re: 'any' and 'all' compared with the rest of the Report
Date: Sun, 28 Jan 2001 01:24:58 +1100
> On 26-Jan-2001, Marko Schuetz wrote:
> > I don't think we have absolute freedom. Assuming we want
> >
> > \forall s : bottom \le s
> >
> > including s = error, then we should also have error \not\le
> > bottom.
>
> You lost me here. Why should we have error \not\le bottom?
> Why not just error \not\lt bottom?
I assumed a semantic distinction between error and bottom was intended
to accurately model the way the implementation would distinguish or defer
the erroneous computation. You are right that without this assumption
error \not\lt bottom would suffice.
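Spelled out, in case it helps (assuming \le is a partial order, hence
antisymmetric):
(1) \forall s : bottom \le s    -- in particular, bottom \le error
(2) error /= bottom             -- the intended semantic distinction
If we also had error \le bottom, then (1) and antisymmetry would give
error = bottom, contradicting (2); hence error \not\le bottom. Without
assumption (2), requiring only error \not\lt bottom would indeed do.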
Marko
From ashley@semantic.org Tue Jan 30 08:13:41 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Tue, 30 Jan 2001 00:13:41 -0800
Subject: O'Haskell OOP Polymorphic Functions
Message-ID: <200101300813.AAA13408@mail4.halcyon.com>
At 2001-01-17 17:03, Lennart Augustsson wrote:
>You seem to want dynamic type tests. This is another feature, and
>sometimes a useful one. But it requires carrying around types at
>runtime.
Yes. I tried to do that myself by adding a field, but it seems it can't
be done.
>You might want to look at existential types; it is a similar feature.
I seem to run into a similar problem:
--
class BaseClass s
data Base = forall a. BaseClass a => Base a
class (BaseClass s) => DerivedClass s
data Derived = forall a. DerivedClass a => Derived a
upcast :: Derived -> Base
upcast (Derived d) = Base d
downcast :: Base -> Maybe Derived
--
How do I define downcast?
--
Ashley Yakeley, Seattle WA
From fjh@cs.mu.oz.au Tue Jan 30 10:37:18 2001
From: fjh@cs.mu.oz.au (Fergus Henderson)
Date: Tue, 30 Jan 2001 21:37:18 +1100
Subject: O'Haskell OOP Polymorphic Functions
In-Reply-To: <200101300813.AAA13408@mail4.halcyon.com>
References: <200101300813.AAA13408@mail4.halcyon.com>
Message-ID: <20010130213718.A27792@hg.cs.mu.oz.au>
On 30-Jan-2001, Ashley Yakeley wrote:
> At 2001-01-17 17:03, Lennart Augustsson wrote:
>
> >You seem to want dynamic type tests.
...
> >You might want to look at existential types; it is a similar feature.
>
> I seem to run into a similar problem:
>
> --
> class BaseClass s
> data Base = forall a. BaseClass a => Base a
>
> class (BaseClass s) => DerivedClass s
> data Derived = forall a. DerivedClass a => Derived a
>
> upcast :: Derived -> Base
> upcast (Derived d) = Base d
>
> downcast :: Base -> Maybe Derived
> --
>
> How do I define downcast?
class BaseClass s where
downcast_to_derived :: s -> Maybe Derived
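For concreteness, a minimal sketch of how this could be filled in (the
Button instance and the default method are illustrative additions, the
rest repeats Ashley's declarations, and it needs the existential types
extension):

class BaseClass s where
    downcast_to_derived :: s -> Maybe Derived
    downcast_to_derived _ = Nothing          -- default: not a Derived

class BaseClass s => DerivedClass s

data Base    = forall a. BaseClass a    => Base a
data Derived = forall a. DerivedClass a => Derived a

downcast :: Base -> Maybe Derived
downcast (Base b) = downcast_to_derived b

data Button = Button                         -- hypothetical concrete type
instance DerivedClass Button
instance BaseClass Button where
    downcast_to_derived b = Just (Derived b)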
--
Fergus Henderson | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: | -- the last words of T. S. Garp.
From daan@cs.uu.nl Tue Jan 30 11:24:59 2001
From: daan@cs.uu.nl (Daan Leijen)
Date: Tue, 30 Jan 2001 12:24:59 +0100
Subject: HScript
References:
Message-ID: <00c001c08aaf$47ea9640$fd51d383@redmond.corp.microsoft.com>
Hi Eric,
Unfortunately HaskellScript doesn't work with the latest hugs versions.
You need the version of Hugs that is explicitly provided on the
HaskellScript website. (http://www.cs.uu.nl/~daan/download/hugs98may.exe)
Hope this helps,
Daan.
----- Original Message -----
From: "Eric Allen Wohlstadter"
To:
Sent: Sunday, January 28, 2001 11:03 AM
Subject: HScript
> I have tried e-mailing the authors of HScript with my problems but they
> seem unresponsive. I was hoping maybe someone out there could help me.
> When I tried to run the demos I got an error "Could not find Prelude.hs",
> so I changed the hugsPath variable in regedit to point to hugs/lib. Then I
> got an error "Could not find Int". So I added hugs/lib/exts. Now I get the
> error "Addr.hs (line 23): Unknown primitive reference to addrToInt".
>
> Eric Wohlstadter
From ashley@semantic.org Tue Jan 30 22:16:08 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Tue, 30 Jan 2001 14:16:08 -0800
Subject: O'Haskell OOP Polymorphic Functions
Message-ID: <200101302216.OAA02054@mail4.halcyon.com>
At 2001-01-30 02:37, Fergus Henderson wrote:
>class BaseClass s where
> downcast_to_derived :: s -> Maybe Derived
Exactly what I was trying to avoid, since now every base class needs to
know about every derived class. This isn't really a practical way to
build an extensible type hierarchy.
--
Ashley Yakeley, Seattle WA
From qrczak@knm.org.pl Tue Jan 30 22:55:59 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 30 Jan 2001 22:55:59 GMT
Subject: O'Haskell OOP Polymorphic Functions
References: <200101300813.AAA13408@mail4.halcyon.com>
Message-ID:
Tue, 30 Jan 2001 00:13:41 -0800, Ashley Yakeley writes:
> How do I define downcast?
You can use a non-standard module Dynamic present in ghc, hbc and Hugs
(I don't know if it's compatible with O'Haskell).
--
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ PLACEHOLDER SIGNATURE
QRCZAK
From fjh@cs.mu.oz.au Wed Jan 31 03:44:07 2001
From: fjh@cs.mu.oz.au (Fergus Henderson)
Date: Wed, 31 Jan 2001 14:44:07 +1100
Subject: O'Haskell OOP Polymorphic Functions
In-Reply-To:
References: <200101300813.AAA13408@mail4.halcyon.com>
Message-ID: <20010131144407.A6401@hg.cs.mu.oz.au>
On 30-Jan-2001, Marcin 'Qrczak' Kowalczyk wrote:
> Tue, 30 Jan 2001 00:13:41 -0800, Ashley Yakeley writes:
>
> > How do I define downcast?
>
> You can use a non-standard module Dynamic present in ghc, hbc and Hugs
> (I don't know if it's compatible with O'Haskell).
That lets you downcast to specific ground types, but it doesn't
let you downcast to a type class constrained type variable.
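For example, a sketch using that non-standard Dynamic module:

import Dynamic      -- provides toDyn, fromDynamic and the Typeable class

asInt :: Dynamic -> Maybe Int
asInt = fromDynamic              -- fine: Int is a specific ground type

-- but there is no generic way to ask "give me this value back at
-- whatever type it happens to have, provided that type is an instance
-- of DerivedClass", which is what downcast :: Base -> Maybe Derived
-- really needs.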
--
Fergus Henderson | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: | -- the last words of T. S. Garp.
From fjh@cs.mu.oz.au Wed Jan 31 03:52:20 2001
From: fjh@cs.mu.oz.au (Fergus Henderson)
Date: Wed, 31 Jan 2001 14:52:20 +1100
Subject: O'Haskell OOP Polymorphic Functions
In-Reply-To: <200101302216.OAA02054@mail4.halcyon.com>
References: <200101302216.OAA02054@mail4.halcyon.com>
Message-ID: <20010131145220.B6401@hg.cs.mu.oz.au>
On 30-Jan-2001, Ashley Yakeley wrote:
> At 2001-01-30 02:37, Fergus Henderson wrote:
>
> >class BaseClass s where
> > downcast_to_derived :: s -> Maybe Derived
>
> Exactly what I was trying to avoid, since now every base class needs to
> know about every derived class. This isn't really a practical way to
> build an extensible type hierarchy.
Right.
I don't know of any way to do that in Hugs/ghc without the problem that
you mention. Really it needs language support, I think.
(I have no idea if you can do it in O'Haskell.)
Note that there are some nasty semantic interactions with dynamic loading,
which is another feature that it would be nice to support. I think it's
possible to add dynamic loading to Haskell 98 without compromising the
semantics, but if you support dynamic type class casts, or overlapping
instance declarations, then dynamically loading a new module could
change the semantics of existing code, rather than just adding new code.
--
Fergus Henderson | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: | -- the last words of T. S. Garp.
From ashley@semantic.org Wed Jan 31 05:47:36 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Tue, 30 Jan 2001 21:47:36 -0800
Subject: Type Pattern-Matching for Existential Types
Message-ID: <200101310547.VAA23977@mail4.halcyon.com>
At 2001-01-30 19:52, Fergus Henderson wrote:
>On 30-Jan-2001, Ashley Yakeley wrote:
>> At 2001-01-30 02:37, Fergus Henderson wrote:
>>
>> >class BaseClass s where
>> > downcast_to_derived :: s -> Maybe Derived
>>
>> Exactly what I was trying to avoid, since now every base class needs to
>> know about every derived class. This isn't really a practical way to
>> build an extensible type hierarchy.
>
>Right.
>
>I don't know of any way to do that in Hugs/ghc without the problem that
>you mention. Really it needs language support, I think.
>(I have no idea if you can do it in O'Haskell.)
It can't be done in O'Haskell either...
Given that we have existential types, it would be nice to have a
pattern-matching mechanism to get at the inside value. Something like...
--
data Any = forall a. Any a
get :: Any -> Maybe Char
get (Any (c::Char)) = Just c -- bad
get _ = Nothing
--
...but as it stands, this is not legal Haskell, according to Hugs:
ERROR "test.hs" (line 4): Type error in application
*** Expression : Any c
*** Term : c
*** Type : Char
*** Does not match : _0
*** Because : cannot instantiate Skolem constant
This, of course, is because the '::' syntax is for static typing. It
can't be used as a dynamic pattern-test.
Question: how big of a change would it be to add this kind of pattern
matching? Is this a small issue, or does it have large and horrible
implications?
--
Ashley Yakeley, Seattle WA
From lennart@mail.augustsson.net Wed Jan 31 06:16:30 2001
From: lennart@mail.augustsson.net (Lennart Augustsson)
Date: Wed, 31 Jan 2001 01:16:30 -0500
Subject: Type Pattern-Matching for Existential Types
References: <200101310547.VAA23977@mail4.halcyon.com>
Message-ID: <3A77ADBD.8C3F6F6C@mail.augustsson.net>
Ashley Yakeley wrote:
> data Any = forall a. Any a
>
> get :: Any -> Maybe Char
> get (Any (c::Char)) = Just c -- bad
> get _ = Nothing
> --
>
> ...but as it stands, this is not legal Haskell, according to Hugs:
>
> ERROR "test.hs" (line 4): Type error in application
> *** Expression : Any c
> *** Term : c
> *** Type : Char
> *** Does not match : _0
> *** Because : cannot instantiate Skolem constant
>
> This, of course, is because the '::' syntax is for static typing. It
> can't be used as a dynamic pattern-test.
>
> Question: how big of a change would it be to add this kind of pattern
> matching? Is this a small issue, or does it have large and horrible
> implications?
It has large and horrible implications. To do dynamic type tests you need
to carry around the types at runtime. This is not something that Haskell
does (at least you don't have to).
-- Lennart
From ashley@semantic.org Wed Jan 31 06:30:17 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Tue, 30 Jan 2001 22:30:17 -0800
Subject: Type Pattern-Matching for Existential Types
Message-ID: <200101310630.WAA27767@mail4.halcyon.com>
At 2001-01-30 22:16, Lennart Augustsson wrote:
>It has large and horrible implications. To do dynamic type tests you need
>to carry around the types at runtime. This is not something that Haskell
>does (at least you don't have to).
Hmm. In this:
--
data Any = forall a. Any a
a1 = Any 3
a2 = Any 'p'
--
...are you saying that a1 and a2 do not have to carry types at runtime?
--
Ashley Yakeley, Seattle WA
From lennart@mail.augustsson.net Wed Jan 31 06:34:58 2001
From: lennart@mail.augustsson.net (Lennart Augustsson)
Date: Wed, 31 Jan 2001 01:34:58 -0500
Subject: Type Pattern-Matching for Existential Types
References: <200101310630.WAA27767@mail4.halcyon.com>
Message-ID: <3A77B210.9DCACC8A@mail.augustsson.net>
Ashley Yakeley wrote:
> At 2001-01-30 22:16, Lennart Augustsson wrote:
>
> >It has large and horrible implications. To do dynamic type tests you need
> >to carry around the types at runtime. This is not something that Haskell
> >does (at least you don't have to).
>
> Hmm. In this:
>
> --
> data Any = forall a. Any a
>
> a1 = Any 3
> a2 = Any 'p'
> --
>
> ...are you saying that a1 and a2 do not have to carry types at runtime?
That's right.
Your data type is actually degenerate. There is nothing you can do
whatsoever with a value of type Any (except pass it around).
Slightly more interesting might be
data Foo = forall a . Foo a (a -> Int)
Now you can at least apply the function to the value after pattern matching.
You don't have to carry any types around, because the type system ensures
that you don't misuse the value.
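For instance (a small sketch):

data Foo = forall a. Foo a (a -> Int)

useFoo :: Foo -> Int
useFoo (Foo x f) = f x                  -- the only thing we can do with x

example :: Int
example = useFoo (Foo "hello" length)   -- evaluates to 5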
-- Lennart
From fjh@cs.mu.oz.au Wed Jan 31 07:11:13 2001
From: fjh@cs.mu.oz.au (Fergus Henderson)
Date: Wed, 31 Jan 2001 18:11:13 +1100
Subject: Type Pattern-Matching for Existential Types
In-Reply-To: <3A77ADBD.8C3F6F6C@mail.augustsson.net>
References: <200101310547.VAA23977@mail4.halcyon.com> <3A77ADBD.8C3F6F6C@mail.augustsson.net>
Message-ID: <20010131181113.A8469@hg.cs.mu.oz.au>
On 31-Jan-2001, Lennart Augustsson wrote:
> Ashley Yakeley wrote:
>
> > data Any = forall a. Any a
> >
> > get :: Any -> Maybe Char
> > get (Any (c::Char)) = Just c -- bad
> > get _ = Nothing
> > --
> >
> > ...but as it stands, this is not legal Haskell, according to Hugs:
> >
> > ERROR "test.hs" (line 4): Type error in application
> > *** Expression : Any c
> > *** Term : c
> > *** Type : Char
> > *** Does not match : _0
> > *** Because : cannot instantiate Skolem constant
> >
> > This, of course, is because the '::' syntax is for static typing. It
> > can't be used as a dynamic pattern-test.
> >
> > Question: how big of a change would it be to add this kind of pattern
> > matching? Is this a small issue, or does it have large and horrible
> > implications?
>
> It has large and horrible implications. To do dynamic type tests you need
> to carry around the types at runtime. This is not something that Haskell
> does (at least you don't have to).
But you can achieve a similar effect to the example above using the
Hugs/ghc `Dynamic' type. Values of type Dynamic do carry around the
type of the encapsulated value.
-- assumes the non-standard Dynamic module (toDyn, fromDynamic, Typeable)
data Any = forall a. Typeable a => Any a

get :: Any -> Maybe Char
get (Any x) = fromDynamic (toDyn x)
This works as expected:
Main> get (Any 'c')
Just 'c'
Main> get (Any "c")
Nothing
Main> get (Any 42)
ERROR: Unresolved overloading
*** Type : (Typeable a, Num a) => Maybe Char
*** Expression : get (Any 42)
Main> get (Any (42 :: Int))
Nothing
--
Fergus Henderson | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: | -- the last words of T. S. Garp.
From nordland@cse.ogi.edu Wed Jan 31 07:11:19 2001
From: nordland@cse.ogi.edu (Johan Nordlander)
Date: Tue, 30 Jan 2001 23:11:19 -0800
Subject: Type Pattern-Matching for Existential Types
References: <200101310547.VAA23977@mail4.halcyon.com> <3A77ADBD.8C3F6F6C@mail.augustsson.net>
Message-ID: <3A77BA98.E4C31D2D@cse.ogi.edu>
Lennart Augustsson wrote:
>
> Ashley Yakeley wrote:
>
> > data Any = forall a. Any a
> >
> > get :: Any -> Maybe Char
> > get (Any (c::Char)) = Just c -- bad
> > get _ = Nothing
> > --
> >
> > ...but as it stands, this is not legal Haskell, according to Hugs:
> >
> > ERROR "test.hs" (line 4): Type error in application
> > *** Expression : Any c
> > *** Term : c
> > *** Type : Char
> > *** Does not match : _0
> > *** Because : cannot instantiate Skolem constant
> >
> > This, of course, is because the '::' syntax is for static typing. It
> > can't be used as a dynamic pattern-test.
> >
> > Question: how big of a change would it be to add this kind of pattern
> > matching? Is this a small issue, or does it have large and horrible
> > implications?
>
> It has large and horrible implications. To do dynamic type tests you need
> to carry around the types at runtime. This is not something that Haskell
> does (at least you don't have to).
>
> -- Lennart
It can also be questioned from a software engineering standpoint. Much
of the purpose with existential types is to provide information hiding;
that is, the user of an existentially quantified type is not supposed to
know its concrete representation. The merits of an information hiding
discipline are probably no news to anybody on this list.
However, this whole idea gets forfeited if it's possible to look behind
the abstraction barrier by pattern-matching on the representation.
Allowing that is a little like saying "this document is secret, but if
you're able to guess its contents, I'll gladly confirm it to you!". The
same argument also applies to information hiding achieved by coercing a
record to a supertype.
This doesn't mean that I can't see the benefit of dynamic type checking
for certain problems. But it should be borne in mind that such a
feature is a separate issue, not to be confused with existential types
or subtyping. And as Lennart says, it's a feature with large (and
horrible!) implications to the implementation of a language.
-- Johan
From ashley@semantic.org Wed Jan 31 07:31:36 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Tue, 30 Jan 2001 23:31:36 -0800
Subject: Type Pattern-Matching for Existential Types
Message-ID: <200101310731.XAA02809@mail4.halcyon.com>
At 2001-01-30 23:11, Johan Nordlander wrote:
>However, this whole idea gets forfeited if it's possible to look behind
>the abstraction barrier by pattern-matching on the representation.
Isn't this information-hiding more appropriately achieved by hiding the
constructor?
--
data IntOrChar = MkInt Int | MkChar Char
data Any = forall a. MkAny a
--
Surely simply hiding MkInt, MkChar and MkAny prevents peeking?
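Something along these lines, say (a sketch; the module name is made up):

module Opaque ( Any, mkAny ) where   -- the MkAny constructor is not exported

data Any = forall a. MkAny a

mkAny :: a -> Any
mkAny = MkAny

-- Clients can construct and pass around Any values, but they cannot
-- pattern-match on MkAny to peek at the representation.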
--
Ashley Yakeley, Seattle WA
From nordland@cse.ogi.edu Wed Jan 31 08:35:55 2001
From: nordland@cse.ogi.edu (Johan Nordlander)
Date: Wed, 31 Jan 2001 00:35:55 -0800
Subject: Type Pattern-Matching for Existential Types
References: <200101310731.XAA02809@mail4.halcyon.com>
Message-ID: <3A77CE6C.F99C01F1@cse.ogi.edu>
Ashley Yakeley wrote:
>
> At 2001-01-30 23:11, Johan Nordlander wrote:
>
> >However, this whole idea gets forfeited if it's possible to look behind
> >the abstraction barrier by pattern-matching on the representation.
>
> Isn't this information-hiding more appropriately achieved by hiding the
> constructor?
>
> --
> data IntOrChar = MkInt Int | MkChar Char
> data Any = forall a. MkAny a
> --
>
> Surely simply hiding MkInt, MkChar and MkAny prevents peeking?
This is the simple way of obtaining an abstract datatype that can have
only one static implementation, and as such it can indeed be understood
in terms of existential types. But if you want to define an abstract
datatype that will allow several different implementations to be around
at run-time, you'll need the full support of existential types in the language.
But you might also want to consider the analogy between your example and a
system which carries type information around at runtime. Indeed,
labelled sums are a natural way of achieving a universe of values tagged
with their type. The difference from real dynamic typing is of course
that for labelled sums the set of possible choices is always closed,
which is also what makes their implementation relatively simple (this
still holds in O'Haskell, by the way, despite subtyping).
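For instance, something like (a sketch):

data Univ = UInt Int | UChar Char    -- a closed, explicitly tagged universe

asChar :: Univ -> Maybe Char
asChar (UChar c) = Just c
asChar _         = Nothing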
Nevertheless (continuing the analogy with your proposal for type
pattern-matching), you can think of type matching as a system where
the constructors can't be completely hidden, just dissociated from
any particular "existentially quantified" type. That is, MkInt and
MkChar would still be allowed outside the scope of the abstract type
IntOrChar, they just wouldn't be seen as constructors specifically
associated with that type. Clearly that would severely limit the
usefulness of the type abstraction feature.
-- Johan
From fjh@cs.mu.oz.au Wed Jan 31 08:48:45 2001
From: fjh@cs.mu.oz.au (Fergus Henderson)
Date: Wed, 31 Jan 2001 19:48:45 +1100
Subject: Type Pattern-Matching for Existential Types
In-Reply-To: <3A77BA98.E4C31D2D@cse.ogi.edu>
References: <200101310547.VAA23977@mail4.halcyon.com> <3A77ADBD.8C3F6F6C@mail.augustsson.net> <3A77BA98.E4C31D2D@cse.ogi.edu>
Message-ID: <20010131194844.A9086@hg.cs.mu.oz.au>
On 30-Jan-2001, Johan Nordlander wrote:
> It can also be questioned from a software engineering standpoint. Much
> of the purpose with existential types is to provide information hiding;
> that is, the user of an existentially quantified type is not supposed to
> know its concrete representation. The merits of an information hiding
> discipline are probably no news to anybody on this list.
>
> However, this whole idea gets forfeited if it's possible to look behind
> the abstraction barrier by pattern-matching on the representation.
That's a good argument for dynamic type casts not being available by
default. However, there are certainly times when the designer of an
interface wants some degree of abstraction, but does not want to
prohibit dynamic type class casts. The language should permit the
designer to express that intention, e.g. using the `Typeable' type
class constraint.
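Concretely, something like this (a sketch; the type names are made up):

data Opaque   = forall a.               Opaque a     -- fully abstract: no casts
data Castable = forall a. Typeable a => Castable a   -- the designer opts in to casts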
> Allowing that is a little like saying "this document is secret, but if
> you're able to guess its contents, I'll gladly confirm it to you!".
Yes. But there are times when something like that is indeed what you
want to say. The language should permit you to say it.
The specific language that you have chosen has some connotations
which imply that this would be undesirable. But saying "this data
structure is abstract, but you are permitted to downcast it" is
really not a bad thing to say. There are different levels of secrecy,
and not everything needs to be completely secret; for some things it's
much better to allow downcasting, so long as you are explicit about it.
> This doesn't mean that I can't see the benefit of dynamic type checking
> for certain problems. But it should be borne in mind that such a
> feature is a separate issue, not to be confused with existential types
> or subtyping.
OK, perhaps we are in agreement after all.
--
Fergus Henderson | "I have always known that the pursuit
| of excellence is a lethal habit"
WWW: | -- the last words of T. S. Garp.
From C.Reinke@ukc.ac.uk Wed Jan 31 13:19:16 2001
From: C.Reinke@ukc.ac.uk (C.Reinke)
Date: Wed, 31 Jan 2001 13:19:16 +0000
Subject: Type Pattern-Matching for Existential Types
In-Reply-To: Message from Johan Nordlander
of "Tue, 30 Jan 2001 23:11:19 PST." <3A77BA98.E4C31D2D@cse.ogi.edu>
Message-ID:
> > > data Any = forall a. Any a
> > >
> > > get :: Any -> Maybe Char
> > > get (Any (c::Char)) = Just c -- bad
> > > get _ = Nothing
..
> It can also be questioned from a software engineering standpoint. Much
> of the purpose with existential types is to provide information hiding;
> that is, the user of an existentially quantified type is not supposed to
> know its concrete representation. The merits of an information hiding
> discipline are probably no news to anybody on this list.
This discussion reminds me of an old paper by MacQueen:
David MacQueen. Using dependent types to express modular structure.
In Proc. 13th ACM SIGPLAN-SIGACT Symposium on Principles of
Programming Languages, pages 277--286, January 1986.
He discusses some of the disadvantages of using plain existential
quantification for modular programming and proposes an alternative
based on dependent sums, which pair the witness type with the
expression in which it is used, whereas plain existentials carry no
witness for the quantified type.
See CiteSeer for some of the follow-on work that references MacQueen's
paper:
http://citeseer.nj.nec.com/context/32982/0
Some of that work tried to find a balance between having no witness
types and carrying witness types around at runtime. In this context,
Claudio Russo's work might be of interest, as he proposes to avoid the
problems of existentials without going into (value-)dependent types:
http://www.dcs.ed.ac.uk/home/cvr/
Claus
From Tom.Pledger@peace.com Wed Jan 31 21:31:56 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Thu, 1 Feb 2001 10:31:56 +1300
Subject: Type Pattern-Matching for Existential Types
In-Reply-To: <3A77B210.9DCACC8A@mail.augustsson.net>
References: <200101310630.WAA27767@mail4.halcyon.com>
<3A77B210.9DCACC8A@mail.augustsson.net>
Message-ID: <14968.33868.475920.268452@waytogo.peace.co.nz>
Lennart Augustsson writes:
[...]
> Slightly more interesting might be
> data Foo = forall a . Foo a (a -> Int)
>
> Now you can at least apply the function to the value after pattern
> matching. You don't have to carry any types around, because the
> type system ensures that you don't misuse the value.
Hi.
In that particular example, I'd be more inclined to use the
existential properties of lazy evaluation:
packFoo x f = Foo x f
-- The actual type of x is buried in the existential
data Bar = Bar Int
packBar x f = Bar (f x)
-- The actual type of x is buried in the (f x) suspension
Of course, when there are more usable things which can be retrieved
from the type, an explicit existential is more appealing:
data Foo' = forall a . Ord a => Foo' [a]
packFoo' xs = Foo' xs
-- The actual type of xs is buried in the existential
data Bar' = Bar' [[Ordering]]
packBar' xs = Bar' oss where
oss = [[compare x1 x2 | x2 <- xs] | x1 <- xs]
-- The actual type of xs is buried in the oss suspension, which
-- is rather bloated
Regards,
Tom