
I saw Scala taken over and destroyed by the FP fanatics who want to turn every language into Haskell. I hope they don't do the same to Rust. Just use Haskell and leave the rest of us alone!

It sounds silly but it's true: as you increase the complexity of the language, the hardcore users writing the libraries adopt those features, which means users have to adopt those features as well. Pretty soon everyone is talking about monad transformers and HKTs, and it takes 4 hours to write something that would take 15 minutes in Python.

Then the wave of FP consultancies and trainings shows up so they can bill you tens of thousands of dollars for workshops and conferences to teach your devs monads.




> I saw Scala taken over and destroyed by the FP fanatics

I suspect that the above reflects a very personal experience rather than something general. I have been using Scala for 10 years and never used monad transformers. I find Scala code usually easy to write and to read. At this point I wouldn't trade it for any other language.

From where I stand, Scala was neither "taken over", nor "destroyed by FP fanatics". It is not Haskell, and the upcoming Scala 3 is not going in the direction of being more like Haskell. Scala has always been about supporting both object-orientation and functional programming. It's still the case with Scala 3, and it's getting better and better.


I have worked in multiple Scala shops and contributed at the highest levels to the Scala ecosystem, and my experience confirms that this rotten attitude is very real and increasingly the norm, as everyone but the fanatical FP-ers has long since moved on to other, more professional/productive circles.

With the exception of shops using Scala exclusively for Spark, anyone entering the ecosystem can expect to be constantly talked down to if they dare to use something deemed evil by the FP hivemind (so, anything other than pure, immutable, FP style with all side effects controlled, no exceptions or nulls, no inheritance (only typeclasses), etc.). They'll be derided and scoffed at constantly for being "just a Java++ programmer".

Meanwhile, for all their smugness, the FP community has achieved nothing at all that has reached the world outside of Scala. The projects successful in bringing developers to Scala have been decidedly in the disparaged lightly-FP/"Java++" style: Spark, Kafka, Linkerd (1.x -- they rewrote 2.x in Go/Rust), Flink, Akka, Play Framework, Twitter's stack, Prisma, etc. Shockingly, the predominant view among these delusional pure-FP obsessives, which usually goes unchallenged, is that the creators of Spark don't know anything about Scala or are bad engineers!

Scala 3 is a joke and will do the exact opposite of its stated goals from years back. It was supposed to streamline and simplify the language, remove gotchas, and add a few high-impact simple features like union types and trait parameters. Instead it has grown into a monstrosity of complicated features that your average dev will never use. Rather than fix Scala 2's issue of having dozens of ways to do the same thing, it introduces multiple new dimensions along which people can do things in multiple ways (the legacy implicit system vs. the new "given" syntax; braces syntax vs. indentation-based).

Control structures helpful for imperative programmers are being removed (you cannot return early from a for loop without a huge hassle), `do-while` is removed, etc. These breakages were not pushed back against because the only people still around are FP-ers who don't want people using loops in the first place.

New type-system features are being added without even knowing if there's a possible use-case. The whole thing is a mess.


I'm not sure why Prisma is called out here. But posts like this bubble up in our Slack, so I thought I should respond...

Prisma's query engine is a complex piece of code and receives very few outside contributions. Moreover, Prisma only has bindings for JS/TypeScript and Go at the moment, so there is no way to consume Prisma from Scala. As a result, very few Scala developers know about Prisma.

We enjoyed Scala as a language, and the massive JVM ecosystem is a huge benefit. That said, we were forced to rewrite the query engine in Rust because we needed a more modular architecture that lets us embed parts of it in JS and Go libraries. We looked at the Scala Native and Graal projects (we spent 6 months building a prototype), but neither delivered a sufficiently low memory footprint. The Prisma 2 rewrite in Rust is a much more stable product, and we love the Rust language.

All the best to both the Scala and Rust ecosystems. Hugs.


I am not sure how much of that is worth responding to. You obviously have a strong opinion, have been burned, and appear to have pent-up anger. I don't share your opinion.

I use Scala, I love it, I am looking forward very much to Scala 3, and I find help in the community when I need it. And I am not a pure FP programmer, although I like lots of the ideas of FP.

> everyone but the fanatical FP-ers have long-since moved on to other more professional/productive circles

I for one haven't and I am not a "fanatical FP-er". I bet I am not the only one.

> Scala 3 is a joke […] it has grown into a monstrosity of complicated features that your average dev will never use

I don't think that's true at all.

> New type-system features are being added without even knowing if there's a possible use-case.

I don't know what you are referring to. From the doc, I note:

- intersection types, which are essentially a better (commutative) way of doing `A with B`

- union types, which several languages now have, including TypeScript, and which are definitely a very useful feature (in particular for Java and JavaScript interop, but there are other use cases)

- dependent and polymorphic function types, which are just an extension of what was possible before with methods

- match types and type lambdas, which I cannot comment on

> The whole thing is a mess

I obviously don't see things with the same eyes you do. I am really excited about Scala 3 and I do think it improves the language significantly, as it should.

Scala 3 also aims to solve a very real issue with previous releases of Scala, by providing a solid binary compatibility story, within Scala 3.x but also between Scala 2 and Scala 3.

> the only people still around are FP'ers who don't want people using loops in the first place

I can't remember the last time I used `do-while`. But you can rewrite it trivially as a `while`, which is not going away (in fact, I think Scalafix will do that for you automatically). In any case, community questions about things like mutability, loops, or more imperative features typically receive answers to the effect that it's all right to use such constructs, especially in the small (like at the level of a single function). These features are part of Scala and generally accepted. They are used by the Scala standard library and regularly acknowledged by Martin Odersky. A quick code search finds such uses in libraries such as Circe and Cats.

Regarding non-local `return`, you are also overlooking the fact that this is a frequent cause of confusion and errors. Removing this feature has little or nothing to do with FP fanaticism.

Personally, I can only encourage programmers to look into Scala. It is a fantastic language with great features. It also has an incredible JavaScript story with Scala.js, which is rock-solid. The transition to Scala 3 will be a good time for newcomers to look at the language and its community with fresh eyes.


> the [pure] FP community has achieved nothing at all [in the last 20 years]

feels so right, is it true?


Well, that would've been a side-effect


I used Scala 2 professionally for a few years. Recently I picked up TypeScript; it has become a very usable Java++ language.


I’ve been interested in Scala recently, although I’ve heard it’s been unfortunately pigeonholed in industrial use. Would you say it’s worth taking up now, or should I wait for Scala 3?


Haha, I'm the wrong person to ask. I was fortunate enough to be in a hands-on senior role, and I promoted a very light Java++ style, learnable in a 2-hour seminar: immutable collections, case classes, pure functions, impure logging & exceptions, sparingly used interface polymorphism, recursion, filter/map/flatMap/fold. Scala: The Good Parts. To this day I have not learned the first thing about implicits or type variance, and there are probably many many many more Scala features I haven't even heard of. The biggest challenge was helping the rest of the team avoid writing inscrutable sbt plugins. The plus side: it makes little difference, Scala 2 vs. Scala 3. The downside: some teams may scoff at such a mundane approach.


It doesn't matter much in my opinion if you start with Scala 2 now or wait for Scala 3. Scala 3 is 95-99% compatible with Scala 2. There is very little that you would learn in Scala 2, especially as a beginner, that will be truly obsolete when 3 is out (an exception might be symbols, which had very little use anyway). In addition, most existing codebases are in Scala 2 at this time.

(I, and my company, use Scala as a general-purpose language, targeting both the JVM and JavaScript, by the way.)


> I suspect that the above reflects a very personal experience rather than something general.

Isn't the FP community in the Scala world at war?


> Isn't the FP community in the Scala world at war?

I have heard that there have been one or two difficult individuals in certain segments of the FP community, especially 5-10 years ago. I understand that this caused the Scalaz/Cats split. But I have never really needed to care about it and it seems to me that this is history. I could be wrong but it's my personal experience.


Thank you for sharing your experience. Back when I used Scala, this was really turning me off the language. Good that it has all settled and that the Scala community can focus on more important tasks again.


A comment I made in https://news.ycombinator.com/item?id=25327337 comes to mind.

"Any method of abstraction that you have internalized becomes conceptually free to you." (So for the sake of others, choose wisely which you will expect maintenance programmers to have internalized!)

If you've internalized all of the FP ideas, then adding them to a new environment seems useful and seems to cost nothing. It is hard to keep track of how much burden you are adding for people just learning the environment because, for you, it is free.


It's worse than that. Getting used to something doesn't make it free. It simply makes you unaware of the cost you're constantly paying. Having convoluted abstractions really screws up your thinking. I've seen this first-hand with class-oriented programming, Enterprise Java, and design patterns. People were 100% convinced they were writing awesome code, when in fact they were wasting time and creating dysfunctional monstrosities. The same illness seems to be re-emerging in the FP space thanks to the obsession with types.


I kind of get that feeling, too.

Rust uses "traits" to do both generic-like things and inheritance-like things. It's not clear this was a win. The result seems to be a system which does both badly.

Rust generics are very restrictive. They're not at all like C++ templates. It's not enough for a type to implement every operation a generic needs; the type has to implement a single trait that provides all those operations. It's like C++ subclassing. So generics over types defined by others can be impossible to write in Rust. This has no safety benefit, since generics are resolved at compile time and all safety tests could be made after generic expansion.

Traits and fixed array bounds do not play well together. This has been considered a bug since at least 2017. Generic parameters can only be types, not numbers. This led to a horrible hack: a Rust crate with types U1, U2... U50 or so, so that small numeric constants can be bashed through the Rust type system.
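
For readers unfamiliar with that hack, here is a simplified sketch of the idea (the real crate, typenum, uses a more elaborate type-level binary encoding): numbers smuggled into the type system as zero-sized types.

    struct U1;
    struct U2;
    struct U3;

    // Each "number type" carries its value as an associated const,
    // so it can be used wherever only a trait bound is allowed.
    trait Length {
        const VALUE: usize;
    }

    impl Length for U1 { const VALUE: usize = 1; }
    impl Length for U2 { const VALUE: usize = 2; }
    impl Length for U3 { const VALUE: usize = 3; }

    fn report<N: Length>() {
        println!("length = {}", N::VALUE);
    }

    fn main() {
        report::<U2>(); // prints "length = 2"
    }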

Not seeing the benefit of all this.

I have to go struggle with another non-helpful "lifetime `'static` required" message from the borrow checker now.


> This has no safety benefit, since generics are resolved at compile time and all safety tests can be made after generic expansion.

The compiler can verify that types are satisfied. It cannot verify that you agree on what those types and operations mean.

For example, `Iterator` isn't very useful without being able to agree on what `Iterator::next`'s `Option` means. So implicit structural traits à la Go are somewhere between useless and actively harmful.
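
A minimal sketch of that shared contract: anyone implementing the nominal `Iterator` trait agrees that `None` from `next` means "exhausted", not "error", and every consumer can rely on that.

    struct Countdown(u32);

    impl Iterator for Countdown {
        type Item = u32;

        fn next(&mut self) -> Option<u32> {
            if self.0 == 0 {
                None // agreed-upon meaning: the sequence is exhausted
            } else {
                let current = self.0;
                self.0 -= 1;
                Some(current)
            }
        }
    }

    fn main() {
        // collect, like every Iterator consumer, relies on that meaning.
        assert_eq!(Countdown(3).collect::<Vec<_>>(), vec![3, 2, 1]);
    }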

You can have nominal traits while allowing orphan instances (that is, implementations of foreign traits on foreign types). Scala and Haskell both have that. But IMO neither has a good solution for the conflicts that inevitably arise in that case.
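
To make the orphan-instance point concrete, here's a sketch of what Rust forbids, and the standard newtype workaround:

    use std::fmt;

    // A direct `impl fmt::Display for Vec<u8>` is rejected with
    // error[E0117]: both the trait and the type are foreign to this
    // crate. The usual workaround is a local wrapper type:
    struct Bytes(Vec<u8>);

    impl fmt::Display for Bytes {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "{} bytes", self.0.len())
        }
    }

    fn main() {
        println!("{}", Bytes(vec![1, 2, 3])); // prints "3 bytes"
    }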


const generics are very close to being done, which will fix the array issue you mention by allowing code to be generic over integers as well as types. This is already in for built-in arrays and traits but user traits currently require nightly.
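
For anyone who hasn't seen the feature, a sketch of what the stabilized subset (min_const_generics) allows:

    // The function is generic over the array length N, checked at
    // compile time; no U1/U2/U3-style types needed.
    fn sum<const N: usize>(xs: [i32; N]) -> i32 {
        xs.iter().sum()
    }

    fn main() {
        println!("{}", sum([1, 2, 3]));       // N = 3 is inferred
        println!("{}", sum([1, 2, 3, 4, 5])); // N = 5 is inferred
    }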

I agree with you about the orphan rule being annoying and wish they’d relax it, but I much prefer traits to C++’s duck-typed templates, if only because the error messages from passing an unsupported type to a generic function are clear and concise, as opposed to the thousands of lines of confusing output you often get in C++.


> Traits and fixed array bounds do not play well together. This is considered a bug, and it's been a bug since at least 2017. Generic parameters can only be types, not numbers.

FWIW the remedy for this is being stabilized soon: https://github.com/rust-lang/rust/pull/79135


> Rust uses "traits" to do both generic-like things and inheritance-like things. It's not clear this was a win. The result seems to be a system which does both badly.

IMO the fact that Rust doesn't do inheritance well is a feature, not a bug. My understanding is that most people even in OOP circles have realized that composition is better than inheritance, and I see Rust's choices around it as a reflection of that.

> It's not enough that every operation on a type needed by a generic be implemented. The type has to be part of a single trait that provides for all those operations.

Maybe I'm misunderstanding, but do you know that a generic can combine multiple traits? For example:

    use std::ops::Add;

    fn foo<T: Copy + Add + Sync>(x: T) {
        let _sum = x + x; // exercises the Add bound
    }
Here `T` must implement `Copy` and `Add` and `Sync`.

Maybe what you're talking about is the fact that methods' identities aren't fully described by their names + types, but also by their trait? i.e.:

    trait TraitA {
        fn foo(&self, x: i32) -> i32;
    }

    trait TraitB {
        fn foo(&self, x: i32) -> i32;
    }

    struct Foo {
    }

    impl TraitA for Foo {
        fn foo(&self, x: i32) -> i32 {
            x + 1
        }
    }

    fn requires_b<T: TraitB>(x: T) {
        x.foo(12);
    }

    fn main() {
        let f = Foo { };
        requires_b(f);
    }

    error[E0277]: the trait bound `Foo: TraitB` is not satisfied
      --> src/main.rs:25:16
       |
    19 | fn requires_b<T: TraitB>(x: T) {
       |                  ------ required by this bound in `requires_b`
    ...
    25 |     requires_b(f);
       |                ^ the trait `TraitB` is not implemented for `Foo`
This was an opinionated decision made by the Rust team - which I think was a good decision - to address the fact that it's possible to have unwanted overlaps between trait method signatures. Just because `fn foo(&self, x: i32) -> i32` is defined for a struct doesn't mean it's the same `foo` that you want to call. Distinguishing methods by trait strengthens the contract. It also avoids the diamond problem. You're right that this isn't directly related to "safety" in a memory sense, but it's part of an overarching philosophy of Rust that encourages intentionality and discourages footguns.
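
To illustrate with a hypothetical continuation of the snippet above (replacing `main`): if `Foo` implemented both traits, fully qualified syntax would let the caller say exactly which `foo` is meant:

    impl TraitB for Foo {
        fn foo(&self, x: i32) -> i32 {
            x * 2
        }
    }

    fn main() {
        let f = Foo { };
        assert_eq!(TraitA::foo(&f, 12), 13); // explicitly TraitA's foo
        assert_eq!(TraitB::foo(&f, 12), 24); // explicitly TraitB's foo
    }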

> Generic parameters can only be types, not numbers.

This is a work in progress and has been partially rolled out; the full implementation is on the way: https://rust-lang.github.io/rfcs/2000-const-generics.html


The thing I found confusing about traits and their generic system was the choice to use "associated types" for output types instead of defining them as a generic parameter:

    trait Iterator {
        type Out;

        fn next(&mut self) -> Option<Self::Out>;
    }
vs

    trait Iterator<T> {
        fn next(&mut self) -> Option<T>;
    }
Having multiple ways to do the same thing has made it harder to understand.
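
For what it's worth, the two aren't interchangeable. A sketch (with made-up names) of the practical difference: a generic parameter lets one type implement the trait many times, while an associated type forces exactly one choice, which is why std's `Iterator` uses `type Item`:

    trait GenIter<T> {
        fn next_val(&mut self) -> Option<T>;
    }

    struct Nums;

    // With a generic parameter, Nums can implement the trait twice...
    impl GenIter<i32> for Nums {
        fn next_val(&mut self) -> Option<i32> { Some(1) }
    }

    impl GenIter<f64> for Nums {
        fn next_val(&mut self) -> Option<f64> { Some(1.0) }
    }

    // ...so callers must spell out which one they mean. An associated
    // type (`type Out`) would force Nums to pick exactly one element
    // type, which is better for inference.
    fn main() {
        let mut n = Nums;
        println!("{:?}", <Nums as GenIter<i32>>::next_val(&mut n));
        println!("{:?}", <Nums as GenIter<f64>>::next_val(&mut n));
    }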


> This has no safety benefit, since generics are resolved at compile time and all safety tests can be made after generic expansion.

Rust has macros and procedural macros for the "check after generic expansion" use case. It's a good thing that this is separate from generics; it side-steps a whole lot of incidental complexity that's seen in C++.


> This has no safety benefit, since generics are resolved at compile time

It's not only safety. That's why C++ concepts were invented. https://cpp.godbolt.org/z/8Gcrj6


I've never really grasped FP and I seem to be in an eternal state of confusion about monads, but I'm excited about GATs in Rust. They'll allow traits that hide implementation details, like `trait Foo { type Bar<T> = T | Box<T> | Arc<T> }`, without dynamic dispatch or ugly hacks.
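
A rough sketch of what that pseudocode might look like with GATs (made-up names; the feature is still unstable, so the syntax may change):

    // A "pointer family" trait: each implementor picks its own
    // container, and generic code stays agnostic, with no dynamic
    // dispatch.
    trait Family {
        type Pointer<T>; // generic associated type (GAT)

        fn wrap<T>(value: T) -> Self::Pointer<T>;
    }

    struct Plain;
    impl Family for Plain {
        type Pointer<T> = T; // store values directly
        fn wrap<T>(value: T) -> T { value }
    }

    struct Boxed;
    impl Family for Boxed {
        type Pointer<T> = Box<T>; // or behind a Box
        fn wrap<T>(value: T) -> Box<T> { Box::new(value) }
    }

    fn main() {
        let a: i32 = Plain::wrap(1);
        let b: Box<i32> = Boxed::wrap(2);
        println!("{} {}", a, b);
    }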


I have worked with FP languages for 4 years now and I still have no clue what a monad or a GAT is.

Using FP without reaching for the most advanced techniques (which Haskell seems to employ) is a very valid and viable choice.


I'm not sure if you meant to imply otherwise, but using Haskell without reaching for the most advanced techniques is also a very valid and viable choice.


Definitely, but I'd imagine this is frowned upon because the realistic (and non-fanatic) folks who use Haskell use it mostly for these advanced features. Could be wrong.


There are quite a number of realistic folks who want to use Haskell because of the simplicity of its basic feature set. See, for example

* https://www.simplehaskell.org/

* https://www.snoyman.com/blog/2019/11/boring-haskell-manifest...


Can't resist. The tension between 'basic feature set' and an admittedly superficial reading of the docs is very funny. The manifesto links to https://github.com/commercialhaskell/rio#readme and urges us to use the rio library to get started. Upon opening the rio link and scanning for a list of the 'basic feature set', I stumble upon the first block of quoted code. After removing 39 end-of-line characters out of respect for the HN audience, it reads:

"Our recommended [language extensions] defaults are: AutoDeriveTypeable BangPatterns BinaryLiterals ConstraintKinds DataKinds DefaultSignatures DeriveDataTypeable DeriveFoldable DeriveFunctor DeriveGeneric DeriveTraversable DoAndIfThenElse EmptyDataDecls ExistentialQuantification FlexibleContexts FlexibleInstances FunctionalDependencies GADTs GeneralizedNewtypeDeriving InstanceSigs KindSignatures LambdaCase MonadFailDesugaring MultiParamTypeClasses MultiWayIf NamedFieldPuns NoImplicitPrelude OverloadedStrings PartialTypeSignatures PatternGuards PolyKinds RankNTypes RecordWildCards ScopedTypeVariables StandaloneDeriving TupleSections TypeFamilies TypeSynonymInstances ViewPatterns"

39 language extensions just to get started. This screams 'incredibly complicated', even if reality is perhaps rather more mundane. Consider a hypothetical 40th language extension, GradualTyping, so that those who would rather write code about data than about types (using a half-baked and evolving type language which, taken to its logical conclusion, will have to become a full-fledged theorem prover in the Coq / Idris / Agda / Lean lineage anyway) could get their jobs done.

Wish you guys all the best!


This is a common response to Haskell language extensions. It comes from a misunderstanding of what a language extension is. A Haskell language extension is not "something that radically changes the language"; it is "a small, self-contained, well-tested piece of functionality that for whatever reason wasn't part of the Haskell 2010 standard". In any other language a "language extension" would just be "a feature of the language".


It's pretty flourishing though if you look at the graphs.


If you look at the graphs or some HN comment, apparently every language is flourishing, including Clojure!

Conversely, using the same technique, one could also ascertain that every language is also languishing.


COBOL is flourishing? A nightmare!


I agree it's doing great for now. This new addition just feels like a minor cause for concern, though. Time will tell how things play out and which faction wins: those who actually build shit, or the architecture astronauts making towers of monads.


Your claims are at odds with each other. If the people who are passionate about a language are the ones who pursue its advanced features (and therefore somehow force them on the rest of the world), who then is left to "actually build shit"? Why, if simple functionality is such a strong requirement of productivity, aren't those who stick to simple features productive enough to maintain an ecosystem without the Architecture Astronauts building all the infrastructure?

It doesn't track that it's possible to ruin a language by sabotaging all of its major libraries with novel features if writing code using those features is actually incredibly difficult. It certainly doesn't track that, once you have somehow sabotaged a language's major libraries, nobody bothers to "fix" them by introducing new, simpler versions.


> If the people who are passionate about a language are the ones who pursue its advanced features (and therefore somehow force them on the rest of the world), who then is left to "actually build shit"?

Some people are passionate about the language itself, and programming language theory in general. Others are passionate about solving whatever particular problem their project solves.

A simple thought experiment - think about the most widely used libraries and tools across the whole developer ecosystem. How many are built in Haskell? I count maybe one, Pandoc. How many are built in terrible code bases and languages but chug along anyways? I count thousands. How many wildly successful companies have pristine code bases and how many have trash fire code bases that chug along anyways?


>A simple thought experiment - think about the most widely used libraries and tools across the whole developer ecosystem. How many are built in Haskell? I count maybe one, Pandoc.

PureScript and Elm are two more. If you don't count languages, then xmonad and Darcs are another two. Both GitHub's and Facebook's efforts in mass source-code searching are written in Haskell (though Facebook's is not really released to the whole developer ecosystem).

This is also a misguided thought experiment - Haskell is relatively unpopular anyways (as Rust is). It has a reputation for being difficult to learn (as Rust does). How many tools across the developer ecosystem are written in Rust? Ripgrep, and maybe Alacritty. Does this reflect badly on Rust? No, it's immature and needs a lot of developer support - which is why much of Rust's development effort is in new libraries.

Does a(n alleged) lack of tools reflect badly on Haskell? No, both because it was for a long time considered an academic language, and because Haskell's great successes have also been outside of the "developer ecosystem" - in webservers, for example.

And none of this addresses my original objection to your point: why, if simplicity is so productive, is it not easy to replace complicated libraries with simpler versions? In Haskell, the answer is that the simpler versions are much less powerful, and the power of advanced languages features is actually a boon for productivity, because encoding your invariants in a good type system saves you work elsewhere. That's the whole benefit of Rust's borrow-checker over C++. There is no real risk of Rust getting "too complicated", because these advanced concepts still let people build shit.


I don't know if this is still widely embraced, but Haskell's motto has traditionally been "Avoid success at all costs." It was meant to be a language that embraced PLT and experimented with cutting-edge techniques, so it's not terribly surprising that it's produced more PLT experiments and hasn't produced as much consumer software as, say, Go, which had essentially the opposite philosophy.


You are right to be concerned.

Fortunately, when you look at the latest version, Scala 3, you will see that they put more effort into simplifying the language and removing things that people complained about than into adding new things.

To be a bit more concrete:

- Removing certain ways of using implicits and making them less confusing

- Cleaning up the language core and removing features (such as impure macros or constructs that are rarely used or confusing)

- Making syntax easier for many cases without adding new features

- Adding union types (comparable to TypeScript). This is a new feature, and not a small one, but I think it will make the language easier to use for many people.

And I think union types and intersection types are a very practical thing to have. So all in all, I'm happy to see that Scala becomes more practical and less esoteric with this release.


So Scala was neither taken over nor destroyed, and the author of this post stresses multiple times that he does not suggest that this approach be adopted by the Rust community, even concluding with:

>But Rust is not Haskell. The ergonomics of GATs, in my opinion, will never compete with higher kinded types on Haskell's home turf. And I'm not at all convinced that it should. Rust is a wonderful language as is. I'm happy to write Rust style in a Rust codebase, and save my Haskell coding for my Haskell codebases.

How is any of this fanatical?


I haven't been paying too much attention to the language's development, but AFAIK the vast majority of the FP in Scala lives in libraries (Cats/Scalaz) and is not baked into the language.

Sure, the language _has_ higher-kinded types, and implicits can be twisted to support typeclasses, but neither of those seems to have been forced into the language by FP fanatics; rather, the language has always included some set of advanced features.


[flagged]


> Look through Knuth, Dijkstra, Turing, etc.

Why even name-drop academics though? Shouldn't you be dismissing them as irrelevant and praising Gates, Jobs and Gosling instead? You know, real INDUSTRY figures?

> find an example of FP--you can't.

Programming with functions... programming with functions... programming with functions. Sure.

> Dijkstra

Here is Dijkstra protesting the replacement of Haskell with Java https://www.cs.utexas.edu/users/EWD/transcriptions/OtherDocs...

> Turing

Turing and Church were contemporaries. The Turing Machine was a landmark achievement useful for reasoning about the limits of computation, but where is it today? Have you tried building anything out of a Turing machine (assuming you had infinite tape)? It's basically Brainfuck. Church's Lambda Calculus predates it and is still useful.


>> etc

John Backus, who can hardly be considered lesser than any of the others on this cherry-picked list, used his Turing Award lecture to rally for functional programming:

http://www.thocp.net/biographies/papers/backus_turingaward_l...


'FP' covers 2 meanings:

A. Pure functions + immutable data structures.

B. Expressive type systems, all the way to compile-time metaprogramming and dependently typed proofs.

Writing programs in style A is tremendously valuable. Expending too much effort on the fine points of the type system, which invariably is simultaneously both under-expressive and over-expressive, is a complete waste of time. Some critical projects require high confidence that they are defect-free, and for those it's legitimate to go all in on formal proofs and take the 10x-100x productivity slowdown. For mere mortals, documenting the structure of the data (JSON) manipulated by the respective functions suffices.
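
A tiny example of style A (sketched in Rust, to match the rest of the thread): a pure function over immutable data, with no type-level machinery in sight.

    // Same inputs always give the same output; the caller's data is
    // never mutated.
    fn with_tax(prices: &[f64], rate: f64) -> Vec<f64> {
        prices.iter().map(|p| p * (1.0 + rate)).collect()
    }

    fn main() {
        let prices = vec![10.0, 20.0];
        let taxed = with_tax(&prices, 0.1);
        println!("{:?} -> {:?}", prices, taxed); // original untouched
    }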


Turing had about as much to do with real-world programming as Alonzo Church, so I think that's equal if you really want to keep score. But there are other famous names in the history of computer science who did support functional programming, e.g. McCarthy or Backus. And there are “real world” programmers like John Carmack who advocate applying FP techniques to their programs.

But this is a stupid thing to argue about and not relevant to this thread.


I would say that it has a lot to do with CS. Basically, there are two equivalent ways to look at computing: as Turing machines or as lambda calculus. The first is far more popular, as early machines and even our current-day hardware are close to it. As machines have become more powerful, we can do more useful work using formulations based on function composition. If anything, this is a trend. What we don't yet have is common sense/knowledge on how to use it judiciously.


I'm writing this on a system with an FP package manager, and anyone programming in any popular language these days will be exposed to a heavy dose of FP constructs and concepts. As far as Knuth is concerned, McIlroy famously buried his grand imperative effort with a one-line functional program 36 years ago. Having said that, your unkind remarks about academic programming-language researchers do not strike me as unfounded in all cases.


> not even much to do with computer science

That is very debatable.

Lambda calculus is a BFD in the subfield of programming languages, which is the topic at hand.


> Lambda calculus is a BFD in the subfield of programming languages

This is a good example of abusing the vocabulary of mathematics as is common in the FP world. A programming language is not an object in abstract algebra. You can't add, subtract, or factorize a programming language. It's just software.


"BFD" here means "big f#$%ing deal", not "b??? factorization domain".


"Bounded Factorization Domain" apparently

https://en.wikipedia.org/wiki/Atomic_domain#Special_cases

This is a pretty hilarious misinterpretation

> "Lambda calculus is a bounded factorization domain in the subfield of programming languages. What's the problem?"



