endpoints4s/endpoints4s 314

Scala library to define HTTP-based communication protocols

harpocrates/inline-rust 205

Use snippets of Rust inline in your Haskell programs

harpocrates/language-rust 75

Parser and pretty-printer for the Rust language

harpocrates/Brainfuck 4

Small Python Brainfuck interpreter and text-encoder.

GaloisInc/covid-19 3

Finding ways to leverage our existing technology and expertise to rapidly help policy-makers and scientists battle the COVID-19 crisis

harpocrates/Encryption 2

Short RSA/ECC implementations written in Haskell.

harpocrates/Fun 2

Interpreter(s) for prototyping evaluation/typing rules.

harpocrates/fgl-scala 1

Manipulate an immutable graph type (based on Haskell's `fgl`)

harpocrates/JavaSc-heme-ript- 1

Simple Scheme implementation written in CoffeeScript.

harpocrates/akka 0

Build highly concurrent, distributed, and resilient message-driven applications on the JVM

push event GaloisInc/asl-translator

Brian Huffman

commit sha 4c834b5dbe550442bbd00ce04b5cfe717e7c2982

Switch from `ansi-wl-pprint` to `prettyprinter` package. This patch includes and adapts to the following submodule PRs: - GaloisInc/what4#77 "prettyprinter" - GaloisInc/crucible#586 "prettyprinter"
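For a rough sense of the target API (an illustrative snippet, not code from this patch; it assumes the Prettyprinter module name introduced in prettyprinter 1.7):

{-# LANGUAGE OverloadedStrings #-}
module PrettyExample where

-- prettyprinter's Doc replaces ansi-wl-pprint's Doc; combinators such as
-- (<+>) and vsep carry over with similar meanings.
import Prettyprinter (Doc, pretty, vsep, (<+>))

greeting :: Doc ann
greeting = vsep ["hello" <+> pretty (42 :: Int), "world"]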

Jared Weakly

commit sha a00935b8ba9a8bcbaed5c186ca782163f87883e7

Fix setup-haskell version

brianhuffman

commit sha 943cf7d724b15c6ae4b650061f96c8bf0b63344a

Merge pull request #28 from GaloisInc/prettyprinter Switch from `ansi-wl-pprint` to `prettyprinter` package.

push time in 17 minutes

PR merged GaloisInc/asl-translator

Switch from `ansi-wl-pprint` to `prettyprinter` package.

This patch includes and adapts to the following submodule PRs:

  • GaloisInc/what4#77 "prettyprinter"
  • GaloisInc/crucible#586 "prettyprinter"
+27 -23

5 comments

7 changed files

brianhuffman

pr closed time in 17 minutes

pull request comment GaloisInc/asl-translator

Switch from `ansi-wl-pprint` to `prettyprinter` package.

Thanks for the fix, @jared-w!

brianhuffman

comment created time in 17 minutes

pull request comment ghc-proposals/ghc-proposals

Generalised type family injectivity

@goldfirere We've been thinking about some of the awkward cases in the paper this morning. For awkward case 2 specifically, it appears the two examples are fundamentally different. The type family W1:

type family W1 a = r | r -> a
type instance W1 [a] = a

is injective on the set of types for which it is actually defined, while W2 is perhaps injective only vacuously, because it is in actuality the empty function: there is no way that it could ever be defined anywhere and remain injective.

type family W2 a = r | r -> a
type instance W2 [a] = W2 a

If we assume that W2 [a] is defined, then we would have W2 [[a]] = W2 [a] = W2 a which would contradict injectivity immediately. So this instance for W2 in some way causes an actual violation of the injectivity constraint.

Let's think some more about W1 though: the form of injectivity that we're deriving for the instance looks like:

W1 [a] ~ W1 [b] => a ~ b

but really, we should only conclude this if W1 [a] and W1 [b] are both defined. As the example in the paper shows, we have:

W1 [W1 Int] ~ W1 Int

per the beta reduction of the instance itself, but if we're allowed to apply injectivity, we'll have

[W1 Int] ~ Int

The problem here is that we're dealing with functions whose domain hasn't been correctly modelled and trying to conclude things using injectivity for points that don't actually lie in the domain of the function. What if we had a new constraint predicate defined that could be applied to a type-level expression involving type families, and which could only be satisfied once no type family applications or type variables occur in it? (This latter restriction because type variables might be instantiated to type family applications.) Then the injectivity law we would infer for the W1 instance would be:

(defined (W1 [a]), defined (W1 [b]), W1 [a] ~ W1 [b]) => a ~ b

We could immediately evaluate the W1 [a] and W1 [b] occurring under the defined predicates to obtain:

(defined a, defined b, W1 [a] ~ W1 [b]) => a ~ b

So now if we apply this with a |-> W1 Int and b |-> Int, we obtain:

(defined (W1 Int), defined Int, W1 [W1 Int] ~ W1 Int) => W1 Int ~ Int

which we can reduce to:

(defined (W1 Int)) => W1 Int ~ Int

But W1 Int is a stuck type, so the defined (W1 Int) constraint will never go away. Moreover, we could potentially observe that any instance defining W1 Int would violate the functional dependency, but that's beyond what is required to prevent this source of potential unsoundness.

While there's still much more to think about, this may be at least one missing ingredient. What do you think?
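For reference, a minimal injective open family that today's GHC accepts without complaint (my own illustration of the syntax under discussion, not code from the thread), for contrast with the awkward W1/W2 equations above:

{-# LANGUAGE TypeFamilies, TypeFamilyDependencies #-}
module InjectiveRef where

-- The result determines the argument and the right-hand sides never unify
-- with each other, so the injectivity annotation is accepted.
type family Wrap a = r | r -> a
type instance Wrap Int  = [Int]
type instance Wrap Bool = [Bool]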

cgibbard

comment created time in an hour

issue comment haskell/haddock

Broken link to idoc in the docs

I see problems with other "Acknowledgements" links as well:

  • HDoc – http://www.fmi.uni-passau.de/~groessli/hdoc/
  • Doxygen – http://www.stack.nl/~dimitri/doxygen/
vrom911

comment created time in an hour

pull request comment ghc-proposals/ghc-proposals

Visible 'forall' in types of terms

Would the T2T translation be required then?

I believe that T2T will be needed for a long time, as I wrote under (4) here.

I think it's actually rather a nice way to carve out the subset of the term language that is allowed to live in the static (= irrelevant, = phantom) part of the language. One day we may allow monad comprehensions, local recursive lets, etc. in types; but we can get a LOT of the benefits of DH without going that far, and T2T gives us a knob that we can move, one click at a time, as we add more features of the relevant (dynamic, corporeal) language into the static (irrelevant, phantom) sub-language.

int-index

comment created time in 2 hours

issue opened haskell/haddock

Broken link to idoc in the docs

The link in the Introduction section to idoc doesn't work for everyone: http://www.cse.unsw.edu.au/~chak/haskell/idoc/ Ideally, it would be better to have a link that is accessible to everyone.

created time in 3 hours

issue opened tweag/linear-base

Design: Consistent & Non-Arbitrary Module Structure

The internal module structure has some major (not completely disjoint) problems:

  • Arbitrariness
    • We shouldn't make arbitrary decisions as to what is or isn't in an internal module.
    • Example: why is flip defined in Prelude.Linear but not in Prelude.Linear.Internal? What is the rule by which we determine what goes in an internal module and what goes together?
    • Example: the same question applies to (<$) in Data.Functor.Linear?
    • What exactly is the reason we put ReaderT in Control.Monad.Linear but not in the internal modules?
    • What is the rule by which we decided to put Consumable and Dupable and Movable in Data.Unrestricted? With the exception of the last, these seem like separate concepts.
  • Cohesion
    • Internal modules should have clear boundaries that ideally are reflected in their name. The name Internal doesn't communicate what the module contains. I should have some idea of what ought to be in this module and what ought not to be -- this way, if I see it imported somewhere, I know basically what is imported. This ideally comes from each module having a clearly defined and self-contained purpose, usually about a single concept.
    • Example: when I look at Data.Unrestricted.Linear and see the import Data.Functor.Linear.Internal I have no idea what this is importing. Is it importing all of Data.Functor.Linear essentially? It seems the purpose is to avoid a cycle of module dependencies but this still tells me very little. It seems like the original programmer has a clear idea of what that module is and what it does but as a new reader, I do not. This is a problem: there shouldn't be this silent knowledge or "internal wisdom" that makes it hard to do things unless you know all this background that isn't explicitly written down. [Cf., normalization of deviance.] If stuff like this grows, this becomes unsustainable.
    • Example: When I look at imports of Control.Monad.Linear.Internal I have no idea what they are really importing. I don't know the boundaries of what is and is not in this module, nor do I understand its purpose.
  • Consistency
    • The template by which we make internal modules should be consistent. If one part of the codebase makes internal modules called Internal that hold anything and everything core requiring no internal dependencies, while other parts like Streaming have internal modules like Streaming.Type, that is inconsistent.

I propose the following system and subsequent changes:

  • System
    • We write this system down in the Design.md document.
    • Internal modules are not named 'Internal' and instead have the name of a single concept, which defines clear boundaries of what one can expect or not expect to be in them.
    • It's good to have a lot of internal modules if each of them is simple. Concrete example: it's good to have Data.Unrestricted.Consumable because it isolates a single self-contained concept, and if the consumable utilities grow, we don't want them to get messy. We also avoid a few module dependency issues this way.
  • Changes
    • We remove the internal Data.Functor and Control.Monad modules, establish Data.Functorial.Linear and Control.Functorial.Linear in which we re-export everything and make internal modules according to the system I've described above.
    • We divide up Data.Unrestricted.Linear
    • We make Data.SymmetricMonoidal.Linear
    • We do as we did with functors to Data.Profunctor.*
    • We make similar changes wherever else there are internal modules or ought to be because there are too many ideas in a file that aren't unified under a clear purpose or concept to the module.

I recognize this system is abstract but hopefully there are enough examples to give a clear enough idea of my proposed solution.
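A minimal sketch of one such single-concept internal module under the proposed system (the module and class names follow linear-base, but the exact contents are my illustrative assumption rather than the library's current code; it assumes GHC's LinearTypes %1 -> syntax):

{-# LANGUAGE LinearTypes #-}
-- Data/Unrestricted/Consumable.hs: exactly one concept lives here.
module Data.Unrestricted.Consumable (Consumable (..)) where

-- Values that may be discarded even when they are held linearly.
class Consumable a where
  consume :: a %1 -> ()

instance Consumable () where
  consume () = ()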

created time in 3 hours

pull request comment GaloisInc/asl-translator

Switch from `ansi-wl-pprint` to `prettyprinter` package.

Long story short, the README for setup-haskell used the v1.1 tag as that tag is the one that introduced some new features like the outputs. I had expected it to be updated along with the v1 tag, but it never was. So the README should've used the v1 tag in its examples.

brianhuffman

comment created time in 3 hours

push event ghc-proposals/ghc-proposals

Joachim Breitner

commit sha 4866144119c9cebde61585ac5b227f53e19be0e4

More votes

push time in 3 hours

push event GaloisInc/asl-translator

Jared Weakly

commit sha a00935b8ba9a8bcbaed5c186ca782163f87883e7

Fix setup-haskell version

push time in 3 hours

pull request comment ghc-proposals/ghc-proposals

NoIncomplete

@adamgundry I'm willing to admit that a -Werror=incomplete default packs the same punch, though I don't think a -Wwarn=incomplete default does. On the other hand, a -Werror=incomplete default is a big breaking change that I assume is totally out of the question. (As I said above, I wouldn't submit NoIncomplete to the report until it was precisely specified and not just lower-bounded.)

Beyond the existing connotation of language extensions being petitions to change Haskell, and warnings not being that, there is also the advantage that an extension is all or nothing (off or error), whilst the -fdefer-* debugging aid is separate and probably discovered afterwards. That makes for the "order of intentionality" [status quo, compile-time error, run-time error], which I think is right and hard to create by other means.
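For concreteness, the kind of definition at stake and the closest flags GHC ships today (my own minimal example; -Werror=incomplete above is the discussion's shorthand, while the spellings below exist now):

{-# OPTIONS_GHC -Wincomplete-patterns #-}
module Incomplete where

-- With -Werror=incomplete-patterns the warning below becomes a compile-time
-- error instead of a run-time "Non-exhaustive patterns" exception.
firstElem :: [a] -> a
firstElem (x:_) = x   -- no equation for [], so GHC warns here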


@ndmitchell Ah, I didn't know your thesis was on that! I'll check it out, and appreciate that you've been going after the same goal, too.

Ericson2314

comment created time in 4 hours

pull request comment ghc-proposals/ghc-proposals

A syntax for Modifiers

This new variant of the proposal seems to have a similar flavour to the fully settled idea previously discussed at https://github.com/ghc-proposals/ghc-proposals/pull/216#issuecomment-598744392 , which makes sense to me.

goldfirere

comment created time in 4 hours

pull request comment ghc-proposals/ghc-proposals

Visible 'forall' in types of terms

@serras if I may give it a shot:

You mention that terms and types need to be unified (which I would expect from a dependently-typed language, Twelf seems to be the special case here). Would the T2T translation be required then? Or would I just get the full term language in indices?

Full term language. The T2T translation is fundamentally about some phases of compilation (parsing, renaming) seeing the term language, and another phase (type checking) seeing the type language. But once those are the same, all 3 phases are seeing the same term-type language, and there is no mismatched interface between phases to duct-tape.

Put another way, the interpretation of a type argument temporarily becomes context sensitive, but is later restored to being context insensitive.

Right now we have type families and term functions: would those be merged too in an hypothetical Dependent future?

Either merged or deprecated. Type families have many issues, chiefly their pervasive partiality as described in https://arxiv.org/pdf/1706.09715.pdf, but also other things, like the associated ones having "redundant" arguments. Implementation-wise, they are more like "quotiented uninterpreted functions" than "functions", since the partiality shows up not as an error but as a stuck term. This is all weird (and annoying to the user!), and is the hallmark of trying to do everything with equality proofs because that was the sole tool on the workbench at the time.

I think trying to fix that stuff will be a massive breaking change (Any stops working, for example), so we are better off just deprecating type families while scavenging the implementation for parts.
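The stuck-term behaviour mentioned above, in a form that compiles today (my own minimal example, not from the comment):

{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
module StuckExample where

import Data.Kind (Type)

type family Head (xs :: [Type]) :: Type where
  Head (x ': xs) = x

-- Head '[] is not a type error: it simply never reduces, i.e. it is a
-- stuck type, which is the partiality-as-stuckness described above.
type Stuck = Head '[]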

How are type classes envisioned in this future? Until now most of the examples relate to data types and functions, but that's another important piece of what gives Haskell its distinctive flavor.

Maybe check out https://arxiv.org/pdf/1905.13706.pdf, a paper I still need to fully grok; I think it's less that type classes are changed than that the new interactions between constraints, coercions, and terms yield fun stuff. For example, I sincerely hope that in case x of Just y -> ..., x ~ Just y becomes given to the right of the ->.
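The nearest thing GHC offers today to an equality becoming given to the right of the -> is GADT pattern matching (my own illustration, not part of the comment above):

{-# LANGUAGE GADTs #-}
module GivenEqualities where

data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool

-- Matching on a constructor brings its type equality into scope as a given.
eval :: Expr a -> a
eval (IntLit n)  = n   -- here a ~ Int is given
eval (BoolLit b) = b   -- here a ~ Bool is given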

At the least, we can do "value classes" like:

class ( m ~ Succ n
      , product (fmap ((+ 1) . hopefullySomethingBetterThanFromIntegral) factors) ~ m
      ) => Composite (m : Natural) where
  factors :: NonEmpty (Fin n)
int-index

comment created time in 4 hours

pull request comment ghc-proposals/ghc-proposals

NoIncomplete

I guess I invite you to propose an alternative roadmap to change these norms on a comparable budget, because all I envision are plans where nothing changes until someone spends 3 months writing formalizations of a hierarchy of pattern-match checkers, and then spends another 3 months refactoring GHC to implement them.

I spent 530 comments and 8 months on just one single prong of the RecordDotSyntax proposal :P. This process is not meant to be cheap, it's meant to be somewhat formalised.

I guess I'm still not convinced your plan gets us to the end state you want cheaply. It adds an extension. It doesn't account for evangelising and changing of social norms where I think 3 years would be too little (btw, I spent a PhD writing a pattern match checker and trying to change norms - so I dropped at least 2 years on this crusade without moving the needle). My suggestion would be add a warning (no more effort than your plan) then evangelise (no more effort than your plan). I think your plan assumes that a language extension gives you a significant amount of that evangelising for free, but I suspect not.

I don't think I've got anything more to say, so I leave it to the committee to decide. (This isn't me rage quitting, I just think we have genuinely held differing views, so leaving it to an independent arbiter makes sense.)

Ericson2314

comment created time in 4 hours

pull request comment ghc-proposals/ghc-proposals

NoIncomplete

We can package up existing warnings into a new convenience -Wincomplete, but I don't think that's going to move the needle on the norms between GHC, its users, and its developers.

I think what matters here are defaults, rather than whether the more-restrictive behaviour is activated by -XNoIncomplete or by -Wincomplete. Personally I'd favour adding a -Wincomplete option, and then arguing that -Wwarn=incomplete or even -Werror=incomplete should be enabled by default.

Ericson2314

comment created time in 4 hours

pull request comment ghc-proposals/ghc-proposals

NoIncomplete

Once you have the warning in a robust and specified shape, then it could become a language pragma. Why is -Wincomplete meaningfully worse than NoIncomplete?

It all goes back to how this proposal is about norms. I think it's crazy that GHC is inserting run-time errors I didn't write into my programs, as do others. We can package up existing warnings into a new convenience -Wincomplete, but I don't think that's going to move the needle on the norms between GHC, its users, and its developers. As long as the pattern match checker is just for warnings, it's not going to register as a problem that its behavior is ad-hoc.

As long as users are turning on warnings, they're going to think they are doing bad things by writing incomplete patterns, not that GHC is doing bad things by allowing them to, just as programmers of untyped languages blame themselves for run-time "type" errors, or programmers without Maybe blame themselves for null pointer exceptions.

I'm not sure I'd use either of those as models of how we should do things in future.

They are not ideal models, but I think they'll suffice. We have decent test coverage that the pattern match checker, however unspecified, doesn't start complaining about code it used to accept. That should avoid the practical problems ImpredicativeTypes had. But also, ImpredicativeTypes had a disclaimer that its behavior wasn't fully formalized and was subject to change.

I guess I invite you to propose an alternative roadmap to change these norms on a comparable budget, because all I envision are plans where nothing changes until someone spends 3 months writing formalizations of a hierarchy of pattern-match checkers, and then spends another 3 months refactoring the pattern match checker to implement them. And, as much as I wish it were otherwise, the problems here are too boring and too beginner-focused ("those in the know just use -Werror and get on with their lives") to attract that level of investment any time soon.

Ericson2314

comment created time in 4 hours

pull request comment ghc-proposals/ghc-proposals

Visible 'forall' in types of terms

@goldfirere thanks for taking the time to write down an extensive comment. I've fallen into many of these misconceptions myself, and most of them are now clearer. I agree with @simonpj about the usefulness of writing some of these ideas down (but I am aware time is a scarce resource!); but these negative comments are also great, as they contrast DH with Agda, Idris, Coq, and the rest of the family.

If I may, here are some additional questions I have about the whole idea (please tell me if you prefer me to post them in #378):

  • You mention that terms and types need to be unified (which I would expect from a dependently-typed language, Twelf seems to be the special case here). Would the T2T translation be required then? Or would I just get the full term language in indices?
  • Right now we have type families and term functions: would those be merged too in an hypothetical Dependent future?
  • How are type classes envisioned in this future? Until now most of the examples relate to data types and functions, but that's another important piece of what gives Haskell its distinctive flavor.
int-index

comment created time in 5 hours

PR opened tweag/linear-base

Functor & Monad Module Organization

This is very much a draft and not ready for any substantial review.

We need to rethink our module organization, because it's messy, unsound, often absurd, and unsustainable. So far, I've just done a minimal amount to get the src/Data/Functor/* stuff separated, but there's a ton more to do to break things into a clean structure. Lots more to say as I continue. Unfortunately, since this stuff is so tightly coupled, this won't be a small PR; it can't be more modular than something that touches a lot of files.

+204 -135

0 comments

20 changed files

pr created time in 5 hours

create branch tweag/linear-base

branch : module-org/functors

created branch time in 5 hours

PR opened haskell/haddock

Use gender neutral word in docs
+8 -8

0 comments

1 changed file

pr created time in 5 hours

pull request comment GaloisInc/asl-translator

Switch from `ansi-wl-pprint` to `prettyprinter` package.

GaloisInc/macaw#178 depends on this PR to pass CI, so I'd like to get this one merged before merging that one.

brianhuffman

comment created time in 5 hours

pull request comment ghc-proposals/ghc-proposals

DYSFUNCTIONAL per-instance pragma for selective lifting of the coverage condition

@simonpj I went ahead and implemented my proposed change from above to the fundep consistency check here.

The relevant bit is here.

Basically, when we check pairwise consistency of the instance with all the other instances in scope, we default to the strict check (advertised in Note [Bogus consistency check]) unless one of the instances is DYSFUNCTIONAL - then we fall back to the old, "bogus" check.

A handful of tests needed (trivial) adjustments by marking some instances DYSFUNCTIONAL.

That seems like a good compromise to me. Please have a look and if that approach makes sense to you, I'll update the proposal to include this change.

@AntC2 if your solution works only for full fundeps, (i.e. it wouldn't work for class F s t a b | s -> a, t -> b, s b -> t, t a -> s), then I'm afraid it's insufficient.
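As background for the coverage condition that DYSFUNCTIONAL selectively lifts, a minimal self-contained example (mine, not taken from the linked patch):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
module Coverage where

-- c -> e: the collection type determines its element type.
class Collects e c | c -> e where
  insert :: e -> c -> c

-- The coverage condition holds: the determined variable e appears in [e],
-- the instantiation of the determining parameter c.
instance Collects e [e] where
  insert = (:)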

arybczak

comment created time in 6 hours

pull request comment ghc-proposals/ghc-proposals

NoIncomplete

@Ericson2314 - For instances/constraints GHC was more permissive than the spec, so exactly how it was more permissive got formalised, and turned into extensions. For impredicative types, a known broken extension limped on for a while, breaking people's code on a semi-regular basis with random hacks around $ in the type checker. I'm not sure I'd use either of those as models of how we should do things in future.

Rather than land an open ended ambiguous extension, why not add an open ended ambiguous warning? Many warnings have those properties. Once you have the warning in a robust and specified shape, then it could become a language pragma. Why is -Wincomplete meaningfully worse than NoIncomplete? Usually for warnings it's worse because the warning doesn't have the properties of stability, definition etc - but for NoIncomplete you don't have those anyway.

Splitting it into two language pragmas that only exist in tandem seems worse than either alternative. And if everyone disagrees with me and wants the extension anyway, that's OK. I think it's probably a mistake, but it's not a massively costly one.

Ericson2314

comment created time in 7 hours

pull request comment ghc-proposals/ghc-proposals

DYSFUNCTIONAL per-instance pragma for selective lifting of the coverage condition

Thanks @AntC2. I think the state of play is this:

  • This DYSFUNCTIONAL proposal makes more explicit a rather unprincipled trick, by selectively lifting fundep conditions. That may give rise to non-termination or (with considerable difficulty) to non-confluence. But not unsoundness. Moreover, it allows us to tighten up the "normal" conditions, so that if you want to stray into this territory you need to explicitly say DYSFUNCTIONAL. So the proposal is a modest improvement over the status quo.

  • In #15632 you proposed a way to loosen the fundep coverage conditions. I think you believe that this will be a bit more expressive, while still retaining confluence and termination. If that was true, it would be a useful step forward.

  • In your GHC proposal about apartness guards, you propose a more radical solution. But because it would be quite a big, far-reaching change, it didn't get much traction (yet anyway).

Is that an accurate summary?

arybczak

comment created time in 7 hours

pull request comment ghc-proposals/ghc-proposals

Support ergonomic dependent types

While I'm pro-dependent-types, I think @AntC2 raises a legitimate question that we should be able to answer at the syntactic level without needing all of dependent types: what are we gaining by deprecating the (punned) [ ] and ( ) syntax in the type namespace, and is it worth it? Concretely: a) in "North Star" DH, do we expect to be able to use [a]-the-term on the right of the :? If not, why do we need to deprecate [a]-the-type? b) if so, what will its semantics be there? If they're the same as the current semantics of [a]-the-type, why do we have to deprecate the latter? If they're different, what are the new semantics, and why are they a more valuable use of this piece of prime syntax?

I think the answers are: a) yes, we want to use [a] in type position at least some of the time b) they are different, we want [a] to always-and-everywhere denote the 1-element list consisting of a, even if a is a type and [a] is on the right of the :, and we consider this consistency to be worth the ergonomic sacrifice of having to write List a, either inherently, or at least in code that's going to be making use of [a]-the-term on the right side of the : (i.e. code that's using dependent types to do interesting stuff), and perhaps we acknowledge this comes at a cost of making code that never uses [a]-the-term on the right of the : less ergonomic.

(Presumably the same logic applies to type puns in "userspace" libraries, but from @goldfirere 's comment it seems the intention is that non-DH code may continue to use those puns (though we may see a "soft fork" in the library ecosystem as to whether punned types are desirable or not) for the foreseeable future - but not the builtin type puns of [ ] and ( ))
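Where the pun already shows up in today's GHC (my own illustration, not from the comment): with DataKinds, a promoted list needs a leading tick precisely because [Int] without one still means the list type:

{-# LANGUAGE DataKinds #-}
module PunExample where

type ManyInts = [Int]    -- the ordinary list type, as today
type OneElem  = '[Int]   -- a type-level list whose single element is Int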

goldfirere

comment created time in 10 hours

pull request comment ghc-proposals/ghc-proposals

DYSFUNCTIONAL per-instance pragma for selective lifting of the coverage condition

On reflection ... I suppose ... With higher-rank/impredicative fields, exactly what you don't want is confluence in the D-H-M sense. Perhaps the pragma here should be IMPREDICATIVE or ANTICONFLUENT. Perhaps per-instance if a FunDep is targeting a higher-rank field (can we detect that?), and [QL]Impredicativity is enabled, there doesn't need to be a pragma at all?

I think #8634 was also trying to do something with higher-rank fields?

I know #10675 and friends mean we have non-confluence currently. I really don't want to hack into what must be gnarly bits of GHC and get non-confluence-in-general officially blessed. I'd rather see this proposal saying: we're going to fix #10675 and the "terrifying" #18759 and related tickets; we're going to provide a back door for narrowly-specified circumstances, in case anybody was exploiting those faults.

arybczak

comment created time in 13 hours

pull request comment GaloisInc/asl-translator

Switch from `ansi-wl-pprint` to `prettyprinter` package.

Maybe @jared-w is the right person to weigh in on actions/setup-haskell. I'd guess this would be a simple fix to update that action, but I don't have experience with it.

brianhuffman

comment created time in 14 hours

pull request comment ghc-proposals/ghc-proposals

Visible 'forall' in types of terms

@gridaphobe

I think it would make a lot of sense if some of them made their way into the proposal text of #378 itself :)

Yes -- good point. I have done this.

int-index

comment created time in 15 hours
