This syntax is...odd. It is not expressive and comes off as a "language hack." The only people who would understand it are those who happen to stumble on this article.
Frankly, if this is an issue that you are concerned about, you should really invest the time to learn (and possibly migrate to) TypeScript. Specifically, see their class documentation which already has support for `public`, `private`, and `protected` [1]. (Edit: read @denisw's reply regarding TS. I make this suggestion specifically for TS's compile-time assistance regarding private field access.)
It actually kind of boggles my mind that V8 engineers sat down and agreed that this would be a good design decision. Granted, the article says that they are proposals—is there any way to provide feedback on this? (Edit: https://github.com/tc39/proposal-class-fields/blob/master/PR...)
One important thing to note here is that this goes much further than `private` in TypeScript. The latter is purely a compile-time illusion. If you write `private x`, your object's x property is still publicly accessible and modifiable as far as the JS runtime is concerned. It's just the TypeScript compiler that gives you an error if you access x, and only if it happens to have the type information it needs to do so (if the object ends up in an `any` variable, all bets are off).
This means TypeScript's "private" properties also show up in the return value of Object.keys() etc. So it's a pretty leaky abstraction.
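To make the distinction concrete, here is roughly what a TS class with a `private` field compiles down to (an illustrative sketch; TypeScript erases the access modifier and emits an ordinary property):

```javascript
// Roughly the output of compiling a TS class that declared `private name`.
class Person {
  constructor(name) {
    this.name = name; // was `private name: string` in the TS source
  }
}

const p = new Person('Ada');
console.log(p.name);         // 'Ada' — fully visible at runtime
console.log(Object.keys(p)); // ['name'] — enumerable like any other property
p.name = 'Grace';            // and freely modifiable
```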
This proposal, on the other hand, tries to solve the much harder problem of bringing true private instance variables to JS. This means making it impossible to access these variables from outside the class. To do this, they had to introduce a completely new "private namespace" that is distinct from the `this.<name>` one, because the latter is already reserved for the visible-to-everyone properties (messing with that would have serious compatibility implications). Hence the `#foo` syntax.
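A minimal sketch of the proposed syntax (the `Counter` class is illustrative, and this assumes an engine with class-fields support):

```javascript
class Counter {
  #count = 0; // private: lives in a separate namespace from `this.count`

  increment() {
    this.#count += 1; // accessible only inside the class body
  }

  get value() {
    return this.#count;
  }
}

const c = new Counter();
c.increment();
console.log(c.value);        // 1
console.log(Object.keys(c)); // [] — #count is invisible to reflection
// `c.#count` out here, outside the class body, would be a SyntaxError
```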
That is a very important clarification and I imagine TypeScript will leverage this new feature.
That said, I still don't like the #. I think that will be the biggest issue with adoption (at least based on feedback on this post). Would the spec have to come up with another symbol if they wanted to implement `protected` behavior?
Does this mean that with TypeScript someone could still see private values if they used their developer console on the browser, but with this approach that would be impossible?
Access modifiers in TS are compile time only checks.
This change is not about visibility in dev tools. I'd imagine you'll still be able to see and mutate private fields in dev console, just like you can in a Java debugger, etc.
This change is for libraries, so that if someone consumes a TS library, for example, they can't monkey patch or otherwise touch private fields at runtime.
Agreed. This proposal seems very sloppy, and I'm disappointed to see that it managed to get to Stage 3.
There is good reason to talk about adding some kind of private or controlled access in Javascript beyond closures. I've often wanted exactly that kind of feature. However, it shouldn't be done through the lens of classes. Classes are largely just syntactic sugar over Javascript's real inheritance model, the prototype chain.
Whatever solution is arrived at should be something that can be described in terms of the prototype chain (or ideally, described in terms of pure objects). That's not to say it shouldn't work with classes; obviously it should. But it should be designed from the bottom up, not the top down.
The fact that this proposal is only applied to classes, and the fact that it does not mention the underlying mechanics about how this would work in the core language underneath classes makes it feel poorly designed. Classes in Javascript aren't magic, we can't just apply an entirely new concept on top of them with no explanation of how it fits into the broader language.
First, like the other commenters, I'm not in love with the perlish # syntax. I'm not sure why it was wise/necessary to add a previously illegal character as a prefix, rather than adding a "private" keyword, but I'm sure there's some reason.
Second, the example affords an opportunity to rant about a pet peeve of mine.
"Now ask yourself, how would you implement this class in JavaScript?"
I wouldn't. Listen folks: unless you're doing some weird/temporary debugging or you need to add some meta-functionality (e.g. logging) to an API that reeallly can't change, don't write getters/setters. If accessing or setting a property needs to do computation or have side effects, refactor so that it's done explicitly by calling a method. Anything that lets you attach invisible effects to normal syntax is an anti-pattern because it makes your code behave differently than it appears to, and therefore it's harder to reason about from the outside.
I'm curious, is JS your primary language? Getters and setters have been around since ~2010 (other languages also make heavy use of them, like Obj-C). I suspect that you only see this as "invisible" because you first learned a language that didn't have getters/setters (and if you started with modern JS, or Obj-C the fact that properties aren't "dumb" would just be a given). Thoughts?
I've been using languages with getters and setters for long enough to be familiar with them. There was a time that I thought they were useful and used them in my own code. But I've been burned enough, and thought enough about language design that I've come to avoid them. I say they're "invisible" because of the property illustrated by these two statements:
foo.bar = 0;
foo.baz = 0;
One of these gets compiled/JIT-ed into a few assembly instructions and the other calls an O(n^2) function (with side effects) on a giant tree structure that foo is a part of. You can't guess which is which from looking at those, unless you've read the source for the foo class and memorized which properties are behind setters. They're "invisible" because you can't see when you're using one.
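A sketch of that hazard (the class, its `bar` accessor, and the "expensive recomputation" are hypothetical stand-ins):

```javascript
class TreeNode {
  constructor() {
    this.baz = 0;       // plain data property
    this._bar = 0;
    this.recomputes = 0;
  }

  // An accessor that hides arbitrary work behind assignment syntax.
  set bar(v) {
    this._bar = v;
    this.recomputeEverything(); // imagine an O(n^2) walk over a giant tree
  }

  get bar() { return this._bar; }

  recomputeEverything() {
    this.recomputes += 1; // stand-in for the real (expensive) work
  }
}

const foo = new TreeNode();
foo.bar = 0; // looks like a plain store, silently runs recomputeEverything()
foo.baz = 0; // actually is a plain store
```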
A good property for code to have is that you can infer its semantics from its syntax with as little outside information as possible. When you read a line of code, you should get reliable clues about what that makes the computer do. You might not need to know /all/ the details - e.g. a method call encapsulates the details of an operation but the name (and documentation) give you a good idea of what's happening.
The problem with getters and setters is that they do more than abstract away details, they cloak potentially important semantic information about code in ways that can be misleading.
By contrast, if I read:
foo.setBar(0);
foo.baz = 0;
I might not know all the details about what setBar does, but at least I have a clue that it's more complicated than the following line.
I've been burned IRL by this kind of thing - I've seen very expensive getters mysteriously ruin hot loops (turning what looked like a linear operation into an O(n^3) operation), developers pulling their hair out because of setters that silently rejected values, and all sorts of associated headaches. Working in settings where they came up semi-frequently made me feel like complexity could be hidden anywhere, and reduced my confidence that I understood code that I read.
JS and other languages with reflection have a whole host of associated issues. If a field gets refactored into a getter, it's suddenly not enumerable and will silently fall out of a lot of copy operations. Suppose you want to find every place that the setter gets called, so you grep for `.bar = ` but did you remember the myriad of other ways that setter could get called (e.g. foo["bar"] =, foo[someVariable] =, Object.assign, etc.)?
In my experience, unless you have some extremely strong reason why you can't refactor or some very well-defined standards about when to use them, they're not worth the trouble they bring when code suddenly stops behaving the way you expect it to.
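The enumerability trap mentioned above is easy to demonstrate (hypothetical classes; accessors declared in a class body land on the prototype, so `Object.keys` and spread no longer see them):

```javascript
class Plain {
  constructor() { this.bar = 1; } // ordinary own, enumerable field
}

class Refactored {
  get bar() { return 1; } // same read API, but now a prototype accessor
}

console.log(Object.keys(new Plain()));      // ['bar']
console.log(Object.keys(new Refactored())); // [] — bar silently gone
console.log({ ...new Refactored() });       // {} — spread copies miss it too
```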
Thank you for writing this. I was on the fence about the utility of getters/setters in js, and wasn’t convinced by your first paragraphs. By the end (“if a field gets refactored into a getter, it’s suddenly not enumerable”), I was 97% on board.
E.g. maybe you had a class 'Box' with a field 'area'. Later you added fields 'width' and 'height'. What are you going to do? Change 'area' to 'getArea()' and break the API? Write extra code to ensure that you update area every time you change width or height? Or write a getter that returns width*height?
I'd say getter is the cleanest option in many cases.
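For reference, the getter version of that hypothetical `Box` might look like this (a sketch, keeping the old `box.area` read API intact):

```javascript
class Box {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }

  // Derived property: stays consistent with width/height automatically,
  // with no extra bookkeeping code on every mutation.
  get area() {
    return this.width * this.height;
  }
}

const box = new Box(3, 4);
console.log(box.area); // 12 — existing `box.area` reads keep working
box.width = 5;
console.log(box.area); // 20 — recomputed on demand
```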
"Breaking the API" is what we call "refactoring" in the (extremely common) case where your code is only used in a single project, and not part of a reusable library.
And yeah, if you radically rework how you make your boxes work, that's a good time to do a refactor. But there's no reason to anticipatorily add in a bunch of complexity so that you don't have to convert a value to a property at some hypothetical future point.
(I mean, among other things, if you put your area property behind get/set methods, you still have to go and change all the setters anyway, since in your brave new width 'n' height world, setting area is nonsensical. So your API still breaks, and you still need to change your consuming code. What was the benefit of the getters/setters then?)
Thanks for providing a great example of useful getters. If you're computing a derived property that depends strictly on the data at hand and its attributes, getters are useful. When designing code, follow the Getter Idempotency Principle:
* Given a reference to a data entity, invoking a getter on the data should always return the same result, no matter how many times it is invoked, no matter how the rest of the program or world state evolve.
There are other situations. In those situations explicit code is preferable:
* Changing an attribute of a data entity. Do not use a setter, period. Create a copy of the data with the attribute changed instead. Even GUI toolkits nowadays use immutable data patterns with great success; see React.
* Using a data entity as a facade for reading data from a 3rd party API. Instead of using an object that pretends to be plain data but isn't (via a heavy getter), use the 3rd party API explicitly to instantiate your immutable data graph.
* Using a data entity as a facade for updating a 3rd party API. Don't use a setter; instead create an updated copy of your data, then explicitly invoke the 3rd party API with the updated data copy.
This is why languages with properties are fantastic. If you find that you would rather have a getter than a member variable, you just change it to a property. You get all the backwards-compatibility of getters without paying the development cost of them up front.
Not breaking an API is one of the few cases where they make sense, but it's sort of a hack that you have to settle for because of back-compatibility. But even then, you've already broken the API in a lot of cases. In your example, 'area' was a field you could read and potentially set. But by changing it to a derived value, code that sets it no longer makes any sense - what do you do with width and height if somebody sets the area? The underlying model changed in a way that broke the old API either way. So yeah, just change 'area' to 'getArea()'.
> Write extra code to ensure that you update area every time you change width or height?
Unless I change width and height all the time, but only read the area very rarely: of course, and gladly so, since I much prefer that to fetching two values and making a multiplication for what could be a simple fetch instead. And don't get me started on code that uses a getter on a value that doesn't change in a loop. I don't find that stuff "clean" at all. Don't just ask how it looks in the editor, ask what it makes the machine do.
Any one of the proposed alternatives is better than altering the contract of the behaviour of `area`, which is what you're doing if you hide it behind a getter.
This applies for languages which can do static type checks.
In JavaScript, there's no assurance that the object you've got even means 'area' to be a number. It could be a string indicating which warehouse the Box is in.
So yes, you bloody well bump the major version number and the consumer needs to check the errata. Pretending to maintain compatibility when you're actually messing with the meaning of things is not something which should be done silently. What happens when someone calls the 'area' setter? Does it trust the width or the height...?
Please, for the love of god, learn functional programming. This is a prime example of being stuck thinking there is a RIGHT way to do things according to object-oriented design practices.
Firstly, objects in JavaScript are not classes; they are just your good old dictionary value store. To a JavaScript object it doesn't matter if you put a function or a value under a property; under the hood it's just a pointer to a memory location. In fact, you can change it dynamically!
Also, if this code is running in the browser you get ZERO security benefits; I can still read your values through the debugger.
But it does impart some very significant slow down, especially if you are calling the getter a LOT.
With `Box.area` it only has to resolve the `Box` variable name and follow one pointer. With `Box.getArea()` it has to create a new scope and stack frame for the function call, and there is SO MUCH MORE going on behind the scenes.
The right solution here is absolutely to update your area as the width and height change.
Please learn functional, event-driven programming if you are going to give advice on JavaScript. Most of the stuff you learned and got tested on in your HS and college courses about how to properly write object-oriented code does not apply to JavaScript. ESPECIALLY if performance is vital, which it often is since you interact with user input.
Let's not stoop to the "learn2code" condescension on a forum where we're talking to fellow developers. The person you replied to had a perfectly reasonable point.
I use strict FP languages every day and could not figure out what your point was, and your condescending intro and outro makes it hard to continue the discussion with you.
That means that for every incrementer you create, you've also got to allocate two additional functions: value and increment. They have to be distinct objects, because `incrementer().value === incrementer().value` must yield false.
Debugging is also harder. When you're debugging and you find an incrementer somewhere else in your program (maybe somewhere unexpected), how do you know what kind of thing you have? All you can tell is that it's got a value and an increment method, so you've got to do a bunch of grepping over your source to find where it comes from. If it were a class instance you could ask instance.constructor.name to find out that it's an Incrementer, so you can grep for `class Incrementer`.
Yes, lambda can do everything, which is really cool to learn! But lambda isn't the ultimate. More specialized tools often carry benefits.
Comparing these function references: `a_inc_instance.value === b_inc_instance.value` is true in javascript when the function `value` is a class method.
In the closure/factory created version it is false - each function `value` is a different instance with a different reference. That will involve unnecessary duplication in the interpreter, but also allows for the function to be optimised for its state (monomorphic etc) which can be significant when more than a simple example `x++` operation is involved.
So the class version does entail less memory usage, but the closure permits more JIT optimisation.
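The difference is easy to see side by side (the factory name `makeIncrementer` is illustrative; the class version assumes class-fields support):

```javascript
class Incrementer {
  #value = 0;
  increment() { this.#value += 1; }
  get value() { return this.#value; }
}

function makeIncrementer() {
  let value = 0; // private via closure
  return {
    increment() { value += 1; },
    get value() { return value; },
  };
}

const a = new Incrementer();
const b = new Incrementer();
console.log(a.increment === b.increment); // true — shared via the prototype

const c = makeIncrementer();
const d = makeIncrementer();
console.log(c.increment === d.increment); // false — fresh functions per call
```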
> If it were a class instance you could ask instance.constructor.name to find out that it's an Incrementer, so you can grep for `class Incrementer`.
I'm not sure how much extra help that kind of thing is; we might instead grep for `symbol\s*=` to find where a symbol was assigned, whether it had a .constructor.name property or not. I can see it is a general advantage of having more types, to have summary information besides the names.
If you are trying to evangelize for functional programming this is the wrong way to do it. “OO people” is not a thing. I can code in both functional and OO style and choose the one that best suits the scenario. Segregating into an us vs them or right vs wrong comes off as a bit smug and holier than thou. I’m sure that wasn’t your intent but just sayin.
On the one hand, I hear what you are saying, but on the other hand, I share the original poster’s frustration with what seems to be a culture that wants to force Simula 67 / C++ / Java as the last word on how everything MUST be done.
He’s venting, not evangelizing.
And his solution is a lot shorter and easier to follow than the drawn out OOP style version, even if I would have put the 2 functions in the returned object literal on their own lines.
> a culture that wants to force Simula 67 / C++ / Java as the last word on how everything MUST be done.
I mean OOP is very popular so maybe that's where you are getting that impression from? That said, you really could make the same argument for functional programming on HN. I see a _lot_ of commentary much like the parent comment that touts functional as the obviously better way to do things. It always kind of rubs me wrong. If you don't like OOP don't use it and vice versa. Personally I prefer functional programming and use it when I can but I don't think twice about switching it up to OOP if that's whats needed. Just think that it's important to bring all tools available to bear (<= spelling??) when you are attacking a problem. This job is hard enough without succumbing to dogmatic turf wars you know?
I went to a lunch with Douglas Crockford in London and was lucky enough to be seated next to him. Got to ask lots of questions about what he thought of new JS and he was unequivocal: All the new class shit is a hack by people determined to turn JS into C#. I agree with him. I don't get why people are wasting their time on private class members when features like pattern matching and the pipeline operator are making their way to the surface. Classes are the worst thing to happen to JS.
All the new class shit is a hack by people determined to turn JS into C#.
And that's not a bad thing. As long as JS is effectively mandatory for front end development, it should be approachable and familiar to the widest possible developer audience. I'm hoping for a WebAssembly future where you'll be able to pick your favorite esoteric functional language and have it transparently run in the browser. But until then, it's better for JS to look like C# than Haskell.
I was in the process of writing a reply that argued against this (mainly because it is not as readable as a class implementation), but I sort of agree with you as well.
That said, considering how widespread JavaScript is at this point, I think it is in their best interest to cater to more than one programming style.
> That said, considering how widespread JavaScript is at this point, I think it is in their best interest to cater to more than one programming style.
No! Stop! Bolting 6 different programming paradigms into a language doesn’t make it easier to learn. It makes it harder, because every programmer who learns JavaScript needs to learn all of the tools of the language so they can read the code written by others.
More language features / paradigms make JS less accessible. It is a terrible idea if your goal is to help people learn JS.
It’s a non-fitting abstraction that’s poorly glued on top of a prototype-based language.
As a result, you have things like:
- methods losing access to `this` on the instance when passed around as callbacks, because they are defined on the prototype and `this` is bound at call time. As a result, you have to manually do a .bind(this) in your constructor for every method that needs stable access to `this`
- meanwhile, if you define a class field: `field = () => this.foo` (with an arrow function), then you have no problems accessing the instance. Because of different scoping rules, binding etc.
- and now there’s private fields. Which are just a hack to make them sorta kinda work
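A sketch of the first two behaviors in that list (the `Widget` class is illustrative; the class-field arrow assumes public class fields support):

```javascript
class Widget {
  constructor() {
    this.name = 'widget';
  }

  method() {        // prototype method: `this` is bound at call time
    return this.name;
  }

  field = () => {   // instance field holding an arrow fn: `this` is lexical
    return this.name;
  };
}

const w = new Widget();
const detachedField = w.field;
const detachedMethod = w.method;

console.log(detachedField()); // 'widget' — the arrow captured the instance
try {
  detachedMethod();           // TypeError: `this` is undefined here
} catch (e) {
  console.log('lost this:', e.constructor.name);
}
```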
I often use a functional style like the OP's above, instead of classes, but I wonder about the performance hit. Calling it often (creating many "instances") and creating new functions every time seems less efficient than inheriting methods via the prototype.
If I were to create 300 instances of something with actual func^H^H^H^H methods attached, rather than 3 instances, I would probably look into creating a prototype and coping with the perils of “this” usage, but otherwise, I’m not going to lose much sleep about it.
This is why macros and a homoiconic language are such a good thing: you can add syntax sugar to have something which is both straightforwardly implemented and readable.
The JSR instruction is ruining assembly! I mean, you can just push the PC, then JMP to the address. And all that the pesky RTS instruction is doing is popping an address from the stack into the PC. Who needs it?!?
This reads like Google is now attempting to dictate the direction Javascript goes in, by implementing proposals before they've become standards, and shipping them in their browser. Given the dominance of the browser engine, it effectively sets the direction of the language, particularly if people start to use it (which they will).
This is feeling like IE6 all over again. We're getting non-standard implementations of standards, that will further encourage websites to be "Chrome only".
Implementing proposals before they become standards is totally normal and a good way to prove out the proposal. Google's implemented tons of proposals, and generally pulled out the ones that didn't become standardized.
> This reads like Google is now attempting to dictate the direction Javascript goes in, by implementing proposals before they've become standards, and shipping them in their browser.
You seem to be proposing a Catch-22 that would freeze JavaScript forever: no one should implement a feature before it is a standard, but completing the process to become a standard requires implementations to exist. Therefore, no new feature can ever be implemented or become part of the standard.
Since ES6 it seems like the language is having an identity crisis. Trying to cherry pick OO concepts here and there.
If Javascript is determined to become fully object oriented, then it might as well just adopt the tried and true concepts in C++/Java for encapsulation.
This just seems like a painful and less verbose way of forcing encapsulation.
I can appreciate Ruby and Python users might not be happy with this convention but this is going to end up in tears I think, maybe not PHP level but still...a mess!
I always understood that some of the JS language quirks are due to their humble beginnings. Nowadays however I expect professional decision making.
In 2018 I expect that a language which in many other regards follows C++-style syntax (semicolons, curly braces, double equality, ...) would also follow expectations like declaring private fields with a `private` keyword rather than some strange new syntactic form.
I've seen very few people outside of TC39 express a need for "truly private" fields in JS, and I've seen even fewer offer positive comments about the syntax. On the other hand, I've seen a good amount of bellyaching about it. I think cramming a fairly unpopular proposal (the #) into a genuinely popular one (class fields) and calling it a day is going to make some folks pretty unhappy.
...at any rate, I was unimpressed with the sigil when I first saw it, and that hasn't changed. I've resigned myself to throwing it in the bin of JS features that I won't be using, like `with`, `eval` and `==`.
I don't understand why public fields need the underscore prefix. Aren't all fields public by default? Why do you need to add a prefix to "hide" public fields?
They don't, that's just the example they chose. Prefixing variables / functions with the underscore has long been a signal for "this is private" in languages that don't provide the functionality.
If they were going to use a new syntax for private members, why not a minus sign: class { -privatefield }? At least there's some precedent for that, and it's already a token in the grammar which currently has no meaning inside a class declaration.
Also: another poster added that private members should just be closed over in functions. Well, I don't find it productive to condemn classes as a concept, but if JS already has classes as syntactic sugar for a style of methods, it makes sense that private members would be sugar for closed-over variables also.
A huge (from my perspective, at least) unsolved problem with the private members proposal is that you can't use it to build immutable objects, not realistically.
Without private members I can write something like:
  class Fruit {
    constructor(props = {}) {
      Object.assign(this, props)
      Object.freeze(this)
    }

    withSize(size) {
      return new Fruit({...this, size})
    }
  }
Now I can do:
  fruit = new Fruit({kind: 'apple'})
  // Later:
  saveFruitToDatabase(fruit.withSize(20))
This pattern works great overall, though it isn't performant. With private members, you'd think you could declare them and also update objects using transformations. But private members don't work with Object.assign(), .entries(), etc., and aren't introspectable at all. So you can't write withSize() to do partial updates without spelling out all the field names, every time.
Turns out it's quite hard to write robust, performant immutable code in JS. (Yes, I know immutable.js has Record. However, it's not compatible with getters and setters.)
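Reworking the `Fruit` example above with a private field shows the gap (a sketch; spread and Object.assign only copy own enumerable string-keyed properties, never private fields):

```javascript
class Fruit {
  #size;
  constructor(kind, size) {
    this.kind = kind;  // public, enumerable
    this.#size = size; // private, invisible to reflection
  }
  get size() { return this.#size; }
}

const apple = new Fruit('apple', 10);
const copy = { ...apple }; // the idiom withSize() relied on

console.log(Object.keys(apple)); // ['kind'] — #size never shows up
console.log(copy.kind);          // 'apple'
console.log(copy.size);          // undefined — the private part was lost
```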
I feel like the person who proposed this is relatively young and newer to coding. The reason I say this is that the functionality he is proposing can already be achieved using one of the myriad JavaScript design patterns - https://addyosmani.com/resources/essentialjsdesignpatterns/b...
It can already be done in a clean and easy way with an anonymous self-executing function - (function(){ /* EVERYTHING HERE IS PRIVATE SCOPE */ })();
This also has added benefits for execution time and variable name resolution.
Learn more about variable scopes and how powerful JavaScript is.
Also recognize that JavaScript is much closer to a functional programming language than to a typical object-oriented language. If you are going to try to make JavaScript objects behave like the classes you know and love from Java, you are going to have a BAD time: it's going to take way longer to code, and be harder to maintain and reason about. Try instead to write asynchronous, event-driven code where functions pass data around rather than data passing functions around.
Also, it is really dumb trying to set private/public fields in JavaScript code that runs in the browser; I can still easily access all the variables through a debugger or by running in a custom environment (like Selenium).
And if your concern is about keeping that data safe in a server environment... well then you really shouldn't rely on private/public to keep you safe. Put in actual user role management and access control at the application level; one of the best ways to do that is to utilize a graph database to store and resolve complex user roles and permissions in your application.
Making classes (or their equivalent) in the way you propose - i.e. "make use of closures for private instance fields" - does not work well. It requires allocating a new copy of each of the class's methods every time the class is instantiated, working around the prototype system.
If we're arguing about crappy misuse/abuse of JS and "doing it wrong", surely this is far worse?
It would be interesting to see exactly how much stuff is copied by the various implementations in such cases.
There are obviously some stack or other data frames that have to be captured somehow, but the code itself should in principle be something the VM makes a single copy of with references to in each object using it as a property. (Internally within a Function type reference, separate trackers for the instructions vs the bound data)
Normally it could be effectively optimised away and just treated as an extra argument.
But it's also required by the spec that `foo().method !== foo().method` when returning new functions from a closure in `foo`, so the function has to be wrapped and a new structure allocated each time to differentiate.
Thanks. I suspect the internals of "===" in this case could look for the presence of a closure data pointer on a Function, or some such hack.
Anyway, I’d probably use an actual prototype on something (with “methods” and) hundreds of instances, but otherwise, I’m not too worried about just using closures and object literals.
He has some test code at the end that compares the speed of resolving function/variable names in the local scope to going up the prototype chain. In his case it's about 8 times faster in the local scope, but it does require a local copy of the variable.
I initially had to learn and understand JS variable resolution when I was writing a server to process Google Analytics data from our customers' accounts to get some valuable business insights from the data.
The beauty of a self-invoking function is that it creates a scope that is not even related to the global scope: if a variable name is not found in that scope it does not start going up scopes, and the scope itself is very clean and not polluted (unless you pollute it yourself), so it's faster to resolve your variable names vs. an object with a long prototype chain. Keep in mind that in the article above the prototype chain would get slower and slower the more methods and variables you added.
Just using a self-invoking function as a wrapper for my array addition sped up my code 7 times. Along with a bunch of other variable lookup optimizations, I was able to process a GB of data per second on my T2 micro AWS server with 1 GB of memory.
In the modern era of computing I am also very confident making trade-offs toward using more memory by copying that function to be closer in memory when I need it.
Also keep in mind that low-level cache hardware (like your CPU cache) is going to take advantage of the function actually being close in memory to the object (temporal and spatial locality): when the object you are trying to use gets loaded into memory, the function is likely to be loaded into the cache with it, and then you don't have to wait on all the cache misses as it traverses up the prototype tree.
OF COURSE there will be some case somewhere where that object's function for some reason takes up a huge amount of memory and it's more efficient to store it in the prototype, but that will be highly unlikely.
But this kind of design pattern:
  var collection = (function() {
    // private members
    var objects = [];

    // public members
    return {
      addObject: function(object) {
        objects.push(object);
      },
      removeObject: function(object) {
        var index = objects.indexOf(object);
        if (index >= 0) {
          objects.splice(index, 1);
        }
      },
      getObjects: function() {
        return JSON.parse(JSON.stringify(objects));
      }
    };
  })();
Is called the module pattern and is used A LOT... like really A LOT in JavaScript. Most npm packages are wrapped like this, for example, because it gives them a blank scope and they don't have to worry about what lives outside that function.
This is a tried and tested pattern. I didn't make this up myself lol
That's why I initially suggested that the developer is probably younger and less experienced; if you have worked with JS in the last 5 years you are almost guaranteed to have run into this.
Also, to clarify something: it technically wouldn't be a closure, since -
A closure is an inner scope that has a reference to its outer scope.
It happens any time a function is created (technically, even at the global level). That actually even goes for IIFEs (Immediately Invoked Function Expressions), since the function is created first (gaining references to its outer scope) and THEN is called (two separate "events" in JS). The call is not part of the function declaration/expression and therefore comes next. Between the two there's a gap, and that's where you'd say there's a closure. After that call the reference to the function ceases to exist and garbage collection tears down the function, and with it the closure.
Bear in mind, in the cases you refer to, the inner functions should only be allocated once - when the file is loaded.
The issue is when you use this pattern for initialising what are effectively classes. Then every time you use `new`, the functions have to be wrapped again.
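A quick sketch of that allocation difference (the `makeCounter`/`Counter` names are made up for illustration):

```javascript
// Closure/factory style: every call allocates fresh function objects
// that close over their own `value`.
function makeCounter() {
  let value = 0;
  return {
    increment: () => ++value,
    get: () => value,
  };
}

// Class style: methods are created once, on Counter.prototype, and shared.
class Counter {
  constructor() { this.value = 0; }
  increment() { return ++this.value; }
}

console.log(makeCounter().increment === makeCounter().increment); // false
console.log(new Counter().increment === new Counter().increment); // true
```

So the factory pays a per-instance allocation cost, while the class pays a prototype lookup on each call; which one wins depends on instance counts and call frequency.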
Again, you are saying the "inner functions should only be allocated once"; there is no rule or book in CS that states that as absolute truth. It is, however, a concept that would be taught in a typical OO programming class.
I don't know where exactly you were taught this, or whether you did not read my whole spiel about that being a memory/CPU trade-off.
Private fields in ECMAScript now look like commented-out class attributes in Python. Cumbersome for someone who unfortunately has to program in both (guess which of those two makes it unfortunate :D).
How does this work with JSON.stringify? Could I convert an object to a string and then scan the string to get the value of the private fields? (I am not great at the details of JS; correct me here.)
Looks like JSON.stringify will not reveal a private field, based on this line from the spec:
"What do you mean by "encapsulation" / "hard private"? It means that private fields are purely internal: no JS code outside of a class can detect or affect the existence, name, or value of any private field of instances of said class.."
It makes me wonder what would happen if they tried `counter["#value"]`. Hopefully still an error, but also hopefully not SyntaxError, because it certainly isn't one.
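For what it's worth, a sketch of what bracket access actually does (assuming a hypothetical `Counter` class with a `#value` field): bracket notation only ever looks up string/symbol keys, so it quietly misses the private field rather than erroring.

```javascript
class Counter {
  #value = 7;
  get value() { return this.#value; }
}
const counter = new Counter();

// "#value" here is just an ordinary string key that doesn't exist:
console.log(counter["#value"]);   // undefined
console.log("#value" in counter); // false

// By contrast, writing `counter.#value` out here, outside the class body,
// is a SyntaxError before the code ever runs.
```

So there is no runtime error at all: the string `"#value"` is simply an absent property.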
Am I the only person who's really stinking annoyed that Chromium is just randomly shipping features that haven't gone through the W3C? Didn't we learn anything from the early days of CSS and browser flags? Didn't we learn anything from flexbox?
Don't call it a proposal if you already have plans to ship it. Private fields haven't been accepted as a standard yet. They should not be shipped except behind a browser flag.
First, the definition of JS has nothing to do with W3C. (Also for HTML, nobody really cares what W3C thinks anyway)
Second, the rules of how JS is standardised is that changes are actually not allowed to go into the final spec until at least two major players implement support. So this is _exactly_ how it's meant to be done.
A substantial amount of discussion and consideration has already happened prior to this point, as well as implementations into transpilers (eg Babel) which people have been using for some time.
Implementing support is not the same thing as turning a feature on by default.
The TC39 process states that limited spec changes may still occur after a stage 3 proposal. Until a proposal reaches stage 4, it should be hidden behind a settings flag. This helps us prevent situations like flexbox, where users code against a specification and find out later that their code has broken in subtle ways.
> as well as implementations into transpilers (eg Babel)
Which is meaningless. Babel is not a part of the standards process, they're free to ship anything they want. Encouraged, in fact, because that provides more in-the-wild usage data.
It is super meaningful - it helps identify potential issues in real world use-cases and it creates effectively a "market" of people already using the feature who want to be able to use it without transpiling. It's also pretty rare that the implementation changes meaningfully after implementation into Babel from around stage 2.
I should have used a more descriptive term than meaningless.
Transpiler implementations do not count as hard restrictions on whether or not a proposal can change in the future. They're allowed to do whatever they want -- the fact that they create demand is very useful, but should not be seen as a restriction on whether or not the standard can evolve past that point.
"It's also pretty rare" is exactly the problem. They can change. When browsers ship a non-standardized behavior that's on by default, they are effectively cementing it. It is very hard to correct a broken implementation that is live, in the wild, on normal sites. The fact that these changes are rare doesn't make it less important to follow the established process. Stage 3 is one last safety check before the feature gets turned on for everyone.
What is the point of having a stage 3, if browser makers treat it as equivalent to stage 4?
Fair point; I get mixed up, since both organizations are pretty active and proposals are often submitted to both. Regardless, even in this case this proposal is being submitted to TC39[0]. It's in stage 3 of 4[1].
Stage 3 proposals should not be shipped on by default. There still may be spec changes between a stage 3 and stage 4 proposal.
When you're building a component that will be used as part of larger applications, possibly written by other people, declaring things as private lets you cordon off internal parts of the component so that it's safer to change them in the future without breaking the applications using it.
Someone trying to make an application work that is looking at your classes in a debugger is going to poke at whatever inside details they can see that helps them get their job done quicker. Now if you change that their application breaks when they update. Not good for anyone.
Privates make it easier for component developers to get things done without worrying about breaking users, and make it easier for users to upgrade their components without as much breakage.
Not gonna mince words. The hashtag syntax is hot garbage. If you need this kind of functionality, just use a superset language like TypeScript. It looks and behaves exactly like you would expect.
Although TypeScript is intended to be a superset of JS, it isn't always. There have been occasions where JS features were not usable via TS due to existing incompatibilities/differences in the implementation of new JS syntax. Really, it is a different language.
The justification is that they needed a unique accessor, because:
- attempting to access a non-existent private property using `this.prop` would create a public property "prop" in JS, which they claim would be a source of bugs.
- they also wanted to allow creating a public property on a subclass with the same name as a private property on a parent class—having the accessors the same could be confusing.
That said, they could still have had keywords for definition with an alternative accessor. # is definitely not idiomatic; it's extremely ugly.
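A sketch of the typo hazard described in the first bullet (the `Timer` class and the misspelling are invented):

```javascript
class Timer {
  constructor() { this.elapsed = 0; }
  tick() {
    // Typo: instead of updating `elapsed`, this silently CREATES a new
    // public property `elpased` on the instance. No error, just a bug.
    this.elpased = this.elapsed + 1;
  }
}

const t = new Timer();
t.tick();
console.log(t.elapsed); // 0 -- never updated
console.log(t.elpased); // 1 -- the accidental property
```

With private fields, the equivalent typo (`this.#elpased`) is rejected at parse time unless the field was declared, which is the bug class the proposal cites.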
I am not sure it is possible any longer for JS to avoid getting larded up with a bunch of features from other languages, due to the popularity it has achieved. But one of the things I used to like about it was its relatively constrained feature set. (I would almost say simplicity, but some of the wackier aspects of the language prevent me from going that far.) Now we have things I’m not sure anyone even uses like Symbols.
(Please don’t tell me I can just constrain the code I write to the subset of features I care about. We all read more code than we write.)
To reiterate what others said, choosing an unusual syntax that is non-obvious motivated by a synthetic example to solve a problem already solved by many other languages in better ways almost raises the suspicion that we’re deliberately trying to make JS even worse... to kill it? Or is this just indicative of the process that got us to this mess in the first place?
A more reasonable policy for JS would be (a) to contain it - no more changes, no more extending its use outside of its current surface; (b) the development of an alternative approach that would - crucially - support competing languages, so that we don't get stuck in this way again; (c) gradually shrinking JS, with a goal of complete retirement.
The second paragraph you wrote seems to indicate that you believe people only use JS because they're forced to - since browsers don't support anything else.
That's not the case. People choose to use JS in all sorts of places where they could make use of many other options.
Development of the JS spec is complicated somewhat by the strict rules of web compatibility. But imo the benefits of that compatibility guarantee far outweigh the downsides. Having slightly ugly/unusual syntax is not a major issue.
You’re right, I have assumed that people only use JS because they have to. Happy to be convinced otherwise - please respond:
Other places - I guess you mean Node, and VS Code and Electron etc? I’ve always assumed that was either because of the available skill set (web devs know JS, so make their tools on JS) or to reuse JS on the web and elsewhere. So not because it’s a good language, but because people know it or solutions exists in it.
Is that what you meant?
To me, Node (for example) makes sense only if your devs don’t know other languages or you want to hire cheaply - I’d always go for a “proper” (better designed, more stable, elegant, robust) language on the backend given the choice.
I am not - and have never been - a front-end or browser JS developer. I did not have JS skills to bring over, and yet Node was my preferred language for writing commercial applications for many years. Now more on the Go side, but still making plenty of Node applications.
It works fantastically well in IO-bound applications for such a large number of reasons. It's not far off the most lightweight option for making simple systems, the ecosystem focusses very much on composition of smaller libs than using hefty frameworks, its single threaded nature is actually a _huge_ benefit (though you wouldn't want to use it for CPU-bound work).
There's also the fact that JS these days is essentially a completely different language from the JS which browsers supported 5 years ago. I started using Node before those changes, but it has only become better with them.
I think you'd be surprised how common this exact story is. A lot of Node engineers are gradually moving over to Go, but it's still a great system in its own right.
Also, in reality, if you're a pre-existing browser JS developer, you're actually going to be bringing a load of baggage which will not help you get used to Node. I'd almost say you're better moving to it if you don't already know the lang.
That said, I still don't like the #. I think that will be the biggest issue with adoption (at least based on feedback on this post). Would the spec have to come up with another symbol if they wanted to implement `protected` behavior?
Alternatively: `this.private.value`, `definitelyNotThis.value`
This change is not about visibility in dev tools. I'd imagine you'll still be able to see and mutate private fields in dev console, just like you can in a Java debugger, etc.
This change is for libraries, so that if someone consumes a TS library, for example, they can't monkey patch or otherwise touch private fields at runtime.
There is good reason to talk about adding some kind of private or controlled access in Javascript beyond closures. I've often wanted exactly that kind of feature. However, it shouldn't be done through the lens of classes. Classes are largely just syntactic sugar over Javascript's real inheritance model, the prototype chain.
Whatever solution is arrived at should be something that can be described in terms of the prototype chain (or ideally, in terms of pure objects). That's not to say it shouldn't work with classes, obviously it should. But it should be designed from the bottom up, not the top down.
The fact that this proposal is only applied to classes, and the fact that it does not mention the underlying mechanics about how this would work in the core language underneath classes makes it feel poorly designed. Classes in Javascript aren't magic, we can't just apply an entirely new concept on top of them with no explanation of how it fits into the broader language.
I'm really surprised something like this came out of Google.
Second, the example affords an opportunity to rant about a pet peeve of mine.
"Now ask yourself, how would you implement this class in JavaScript?"
I wouldn't. Listen folks: unless you're doing some weird/temporary debugging or you need to add some meta-functionality (e.g. logging) to an API that reeallly can't change, don't write getters/setters. If accessing or setting a property needs to do computation or have side effects, refactor so that it's done explicitly by calling a method. Anything that lets you attach invisible effects to normal syntax is an anti-pattern because it makes your code behave differently than it appears to, and therefore it's harder to reason about from the outside.
foo.bar = 0; foo.baz = 0;
One of these gets compiled/JIT-ed into a few assembly instructions and the other calls an O(n^2) function (with side effects) on a giant tree structure that foo is a part of. You can't guess which is which from looking at those, unless you've read the source for the foo class and memorized which properties are behind setters. They're "invisible" because you can't see when you're using one.
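A contrived sketch of that invisibility (the names are made up, and the setter body just stands in for the expensive work):

```javascript
let rebuilds = 0;
const foo = {
  bar: 0, // plain data property
  _baz: 0,
  set baz(v) {
    rebuilds++; // imagine an O(n^2) tree rebuild hiding here
    this._baz = v;
  },
  get baz() { return this._baz; },
};

foo.bar = 0; // a plain store
foo.baz = 0; // looks identical, but runs the setter
console.log(rebuilds); // 1
```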
A good property for code to have is that you can infer its semantics from its syntax with as little outside information as possible. When you read a line of code, you should get reliable clues about what that makes the computer do. You might not need to know /all/ the details - e.g. a method call encapsulates the details of an operation but the name (and documentation) give you a good idea of what's happening.
The problem with getters and setters is that they do more than abstract away details, they cloak potentially important semantic information about code in ways that can be misleading.
By contrast, if I read:
foo.setBar(0); foo.baz = 0;
I might not know all the details about what setBar does, but at least I have a clue that it's more complicated than the following line.
I've been burned IRL by this kind of thing - I've seen very expensive getters mysteriously ruin hot loops (turning what looked like a linear operation into an O(n^3) one) and developers pulling their hair out because of setters that silently rejected values, and all sorts of associated headaches. Working in settings where they came up semi-frequently made me feel like complexity could be hidden anywhere, and reduced my confidence that I understood code that I read.
JS and other languages with reflection have a whole host of associated issues. If a field gets refactored into a getter, it's suddenly not enumerable and will silently fall out of a lot of copy operations. Suppose you want to find every place that the setter gets called, so you grep for `.bar = ` but did you remember the myriad of other ways that setter could get called (e.g. foo["bar"] =, foo[someVariable] =, Object.assign, etc.)?
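A sketch of the enumerability trap (hypothetical `User` class; `_name` is the backing field after the refactor):

```javascript
// Before the refactor, `name` was a plain own field and showed up everywhere.
// After moving it behind a getter, it lives on User.prototype (non-enumerable)
// and silently disappears from key listings and copies.
class User {
  constructor() { this._name = "ada"; }
  get name() { return this._name; }
}
const u = new User();

console.log(Object.keys(u));            // ["_name"] -- no "name"
console.log(Object.assign({}, u).name); // undefined -- the copy lost it
```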
In my experience, unless you have some extremely strong reason why you can't refactor or some very well-defined standards about when to use them, they're not worth the trouble they bring when code suddenly stops behaving the way you expect it to.
E.g. maybe you had a class 'Box' with a field 'area'. Later you added fields 'width' and 'height'. What are you going to do? Change 'area' to 'getArea()' and break the API? Write extra code to ensure that you update area every time you change width or height? Or write a getter that returns width*height?
I'd say getter is the cleanest option in many cases.
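A minimal sketch of that option, reusing the `Box`/`area` names from the example:

```javascript
class Box {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }
  // `area` stays readable as a plain property, but is derived on demand,
  // so there is no cached value to keep in sync.
  get area() { return this.width * this.height; }
}

const b = new Box(3, 4);
console.log(b.area); // 12
b.width = 5;
console.log(b.area); // 20
```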
And yeah, if you radically rework how you make your boxes work, that's a good time to do a refactor. But there's no reason to anticipatorily add in a bunch of complexity so that you don't have to convert a value to a property at some hypothetical future point.
(I mean, among other things, if you put your area property behind get/set methods, you still have to go and change all the setters anyway, since in your brave new width 'n' height world, setting area is nonsensical. So your API still breaks, and you still need to change your consuming code. What was the benefit of the getters/setters then?)
And the area example use-case holds if it was a read-only property.
* Given a reference to a data entity, invoking a getter on the data should always return the same result, no matter how many times it is invoked, no matter how the rest of the program or world state evolve.
There are other situations. In those situations explicit code is preferable:
* Change an attribute of a data. Do not use a setter, period. Create a data copy with the attribute changed instead. Even GUI toolkits nowadays are using immutable data patterns with great success, see React.
* Use a data entity as a facade for reading data from a 3rd party API. Instead of using an object that pretends to be data but isn't via a heavy getter, use the 3rd party API explicitly to instantiate your immutable data graph.
* Use a data entity as a facade for updating a 3rd party API. Don't use a setter, instead create an updated copy of your data, then explicitly invoke the 3rd party API with the updated data copy.
Not breaking an API is one of the few cases where they make sense, but it's sort of a hack that you have to settle for because of back-compatibility. But even then, you've already broken the API in a lot of cases. In your example, 'area' was a field you could read and potentially set. But by changing it to a derived value, code that sets it no longer makes any sense - what do you do with width and height if somebody sets the area? The underlying model changed in a way that broke the old API either way. So yeah, just change 'area' to 'getArea()'.
Unless I change width and height all the time but only read the area very rarely: of course, and gladly so, since I much prefer that to fetching two values and doing a multiplication for what could be a simple fetch. And don't get me started on code that uses a getter on a value that doesn't change inside a loop. I don't find that stuff "clean" at all. Don't just ask how it looks in the editor; ask what it makes the machine do.
In JavaScript, there's no assurance that the object you've got even means 'area' to be a number. It could be a string indicating which warehouse the Box is in.
So yes, you bloody well bump the major version number and the consumer needs to check the errata. Pretending to maintain compatibility when you're actually messing with the meaning of things is not something which should be done silently. What happens when someone calls the 'area' setter? Does it trust the width or the height...?
First, objects in JavaScript are not classes; they are just your good old dictionary value store. To a JavaScript object it doesn't matter if you put a function or a value under a property; under the hood it's just a pointer to a memory location. In fact, you can change it dynamically!
Also, if this code is running in the browser you get ZERO security benefits; I can still read your values through the debugger.
But it does impart some very significant slowdown, especially if you are calling the getter a LOT.
With `box.area` it literally has to resolve the `box` variable name and follow one pointer. With `box.getArea()` it has to create a new scope and stack frame for the function call, and there is SO MUCH MORE going on behind the scenes.
The right solution here is absolutely to update your area as the width and height change.
Please learn functional, event-driven programming if you are going to give advice on JavaScript. Most of the stuff you learned and got tested on in your HS and college courses about how to properly write object-oriented code does not apply to JavaScript. ESPECIALLY if performance is vital, which it often is, since you interact with user input.
I use strict FP languages every day and could not figure out what your point was, and your condescending intro and outro makes it hard to continue the discussion with you.
Debugging is also harder. When you're debugging and you find an incrementer somewhere else in your program (maybe somewhere unexpected), how do you know what kind of thing you have? All you can tell is that it's got a value and an increment method, so you've got to do a bunch of grepping over your source to find where it comes from. If it were a class instance you could ask instance.constructor.name to find out that it's an Incrementer, so you can grep for `class Incrementer`.
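A sketch of the difference (the closure-built object is invented for contrast):

```javascript
class Incrementer {
  constructor() { this.value = 0; }
  increment() { this.value++; }
}

// The class instance carries a greppable label:
console.log(new Incrementer().constructor.name); // "Incrementer"

// The closure-built equivalent is anonymous as far as the debugger cares:
const anon = (() => { let v = 0; return { increment: () => v++ }; })();
console.log(anon.constructor.name); // "Object"
```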
Yes, lambda can do everything, which is really cool to learn! But lambda isn't the ultimate. More specialized tools often carry benefits.
In the closure/factory created version it is false - each function `value` is a different instance with a different reference. That will involve unnecessary duplication in the interpreter, but also allows for the function to be optimised for its state (monomorphic etc) which can be significant when more than a simple example `x++` operation is involved.
So the class version does entail less memory usage, but the closure permits more JIT optimisation.
> If it were a class instance you could ask instance.constructor.name to find out that it's an Incrementer, so you can grep for `class Incrementer`.
I'm not sure how much extra help that kind of thing is; we might instead grep for `symbol\s*=` to find where a symbol was assigned, whether or not it had a .constructor.name property. I can see it is a general advantage of having more types, to have summary information besides the names.
He’s venting, not evangelizing.
And his solution is a lot shorter and easier to follow than the drawn out OOP style version, even if I would have put the 2 functions in the returned object literal on their own lines.
I mean OOP is very popular so maybe that's where you are getting that impression from? That said, you really could make the same argument for functional programming on HN. I see a _lot_ of commentary much like the parent comment that touts functional as the obviously better way to do things. It always kind of rubs me wrong. If you don't like OOP don't use it and vice versa. Personally I prefer functional programming and use it when I can but I don't think twice about switching it up to OOP if that's whats needed. Just think that it's important to bring all tools available to bear (<= spelling??) when you are attacking a problem. This job is hard enough without succumbing to dogmatic turf wars you know?
And that's not a bad thing. As long as JS is effectively mandatory for front end development, it should be approachable and familiar to the widest possible developer audience. I'm hoping for a WebAssembly future where you'll be able to pick your favorite esoteric functional language and have it transparently run in the browser. But until then, it's better for JS to look like C# than Haskell.
That said, considering how widespread JavaScript is at this point, I think it is in their best interest to cater to more than one programming style.
No! Stop! Bolting 6 different programming paradigms into a language doesn’t make it easier to learn. It makes it harder, because every programmer who learns JavaScript needs to learn all of the tools of the language so they can read the code written by others.
More language features / paradigms make JS less accessible. It is a terrible idea if your goal is to help people learn JS.
As a result, you have things like:
- methods that lose access to `this` on the class instance when passed around (e.g. as callbacks), because they are defined on the prototype and `this` is resolved at the call site. As a result, you have to manually do a .bind(this) in your constructor for every method that needs reliable access to `this`
- meanwhile, if you define a class field: `field = () => this.foo` (with an arrow function), then you have no problems accessing the instance. Because of different scoping rules, binding etc.
- and now there’s private fields. Which are just a hack to make them sorta kinda work
I think it's perfectly readable, and it requires no more language hacks that obfuscate the JavaScript underlying `class` syntax sugar.
My IDE can easily infer the properties of such objects, as well as display the JSDoc for such properties.
This reads like Google is now attempting to dictate the direction Javascript goes in, by implementing proposals before they've become standards, and shipping them in their browser. Given the dominance of the browser engine, it effectively sets the direction of the language, particularly if people start to use it (which they will).
This is feeling like IE6 all over again. We're getting non-standard implementations of standards, that will further encourage websites to be "Chrome only".
Is that like commenting before knowing how the TC-39 standardization process works? :P
The proposals are Stage 3. To reach Stage 4 and be complete, there have to be actual, multiple implementations shipping[1].
As the spec page notes, implementations are also almost done in both Firefox[2] and Safari[3].
[1] https://tc39.github.io/process-document/
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1499448
[3] https://bugs.webkit.org/show_bug.cgi?id=174212
You seem to be proposing a Catch-22 that would freeze JavaScript forever: no one should implement a feature before it is a standard, but completing the process to become a standard requires implementations to exist. Therefore, no new feature can ever be implemented or become part of the standard.
If Javascript is determined to become fully object oriented, then it might as well just adopt the tried and true concepts in C++/Java for encapsulation.
This just seems like a painful and less verbose way of forcing encapsulation.
I can appreciate Ruby and Python users might not be happy with this convention but this is going to end up in tears I think, maybe not PHP level but still...a mess!
In 2018 I expect that a language which in many other regards follows the C++ syntax style (semicolons, curly braces, double equality, ...) would also meet expectations like declaring private fields with a `private` keyword, not some strange new syntax form.
Disappointed!
...at any rate, I was unimpressed with the sigil when I first saw it, and that hasn't changed. I've resigned myself to throwing it in the bin of JS features that I won't be using, like `with`, `eval` and `==`.
https://mathiasbynens.be/notes/reserved-keywords
Also, will there be a way to do protected?
Also: another poster added that private members should just be closed over in functions. Well, I don't find it productive to condemn classes as a concept, but if JS already has classes as syntactic sugar for a style of methods, it makes sense that private members would be sugar for closed-over variables also.
Without private members I can write something like:
Now I can do: This pattern works great overall, though it isn't performant. With private members, you'd think you could declare them and also update objects using transformations. But private members don't work with Object.assign(), .entries(), etc., and aren't introspectable at all. So you can't write withSize() to do partial updates without spelling out all the field names every time. Turns out it's quite hard to write robust, performant immutable code in JS. (Yes, I know immutable.js has Record. However, it's not compatible with getters and setters.)
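A sketch of the interaction described here (a hypothetical `Size` class stands in for the example):

```javascript
class Size {
  #width;
  #height;
  constructor(width, height) {
    this.#width = width;
    this.#height = height;
  }
  get width()  { return this.#width; }
  get height() { return this.#height; }
}

const s = new Size(2, 3);

// Spread and Object.assign only see own enumerable properties, so both the
// private state and the prototype getters vanish from the copy:
const copy = { ...s };
console.log(Object.keys(copy)); // []
console.log(copy.width);        // undefined

// A "partial update" therefore has to go back through the constructor:
const resized = new Size(10, s.height);
console.log(resized.width, resized.height); // 10 3
```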
I feel like the person who proposed this is relatively young and newer to coding. The reason why I say this is that the functionality he is proposing can already be done using one of the myriad of JavaScript design patterns - https://addyosmani.com/resources/essentialjsdesignpatterns/b...
It can already be done in a clean and easy way with an anonymous self-executing function - `(function(){ /* EVERYTHING HERE IS PRIVATE SCOPE */ })();`
This also has massive added benefits for execution time and variable name resolution.
Learn more about variable scopes and how powerful JavaScript is.
Also recognize that JavaScript is much closer to a functional programming language than to a typical object-oriented language. If you are going to try to make JavaScript objects behave like the classes you know and love from Java, you are gonna have a BAD time: it's going to take way longer to code, and it's going to be hard to maintain and reason about. Try instead to write asynchronous, event-driven code where functions pass data around rather than data passing functions around.
Also, it is really dumb trying to set private/public fields in JavaScript code that runs in the browser; I can still easily access all the variables through a debugger or by running it in a custom environment (like Selenium).
And if your concern is about keeping that data safe in a server environment... well then you really shouldn't rely on private/public to keep you safe. Put in actual user role management and access control at the application level; one of the best ways to do that is to utilize a graph database to store and resolve complex user roles and permissions in your application.
If we're arguing about crappy misuse/abuse of JS and "doing it wrong", surely this is far worse?
There are obviously some stack or other data frames that have to be captured somehow, but the code itself should in principle be something the VM makes a single copy of, with references to it in each object using it as a property. (Internally, within a Function-type reference, separate trackers for the instructions vs the bound data.)
But it's also required by the spec that `foo().method !== foo().method` when returning new functions from a closure in `foo`, so the function has to be wrapped and a new structure allocated each time to differentiate.
Anyway, I’d probably use an actual prototype on something (with “methods” and) hundreds of instances, but otherwise, I’m not too worried about just using closures and object literals.
BUT if you want to talk about its speed and efficiency, here we go:
It very intentionally trades memory space for faster variable resolution.
plz read this article - https://www.toptal.com/javascript/javascript-prototypes-scop...
he has some test code at the end that compares the speed of resolving function/variable names in the local scope compared to going up the prototype chain. in his case its about 8 times faster in the local scope, but does in fact require a local copy of the variable.
I initially had to learn and understand JS variable resolution when i was writing a server to process Google Analytics data from our customers accounts to get some valuable business insides from the data.
The beauty of a self terminating function is that it creates a scope that is not even related to the global scope, if the variable name is not found in that scope it does not start going up scopes, and the scope itself is very clean and not polluted (unless you pollute it yourself) so its faster to resolve your variable names, vs an object with a long prototype chain. keep in mind that in the article above the prototype chain would get slower and slower the more methods and variables you added.
Just using a self-invoking function as a wrapper for my array addition sped up my code 7 times. Along with a bunch of other variable-lookup optimizations, I was able to process a GB of data per second on my t2.micro AWS server with 1 GB of memory.
In the modern era of computing I am also very comfortable trading extra memory for copying that function to be closer in memory when I need it.
Also keep in mind that low-level cache hardware (like your CPU cache) will take advantage of the function actually being close in memory to the object, as well as of its probably being accessed around the same time as the variable (temporal and spatial locality). When the object you are trying to use gets loaded into memory, the function is likely to be loaded into the cache with it, and then you don't have to wait on all the cache misses as the lookup traverses up the prototype tree.
Of course there will be some case somewhere where that object's function takes up a huge amount of memory and it's more efficient to store it on the prototype, but that will be highly unlikely.
But this kind of design pattern:
```javascript
var collection = (function() {
  // private members
  var objects = [];
})();
```

is called the module pattern, and it is used a lot (like, really a lot) in JavaScript. Most npm packages are wrapped like this, for example, because it gives them a blank scope and they don't have to worry about what lives outside that function.
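On its own that snippet never exposes anything; in practice the IIFE returns a public API while the array stays private. A minimal sketch (the `add`/`count` methods are illustrative, not from the original comment):

```javascript
// Module pattern: the IIFE's scope holds the private state; only the
// returned object is reachable from outside.
var collection = (function () {
  var objects = []; // private member, invisible outside the IIFE

  return {
    add: function (obj) { objects.push(obj); },
    count: function () { return objects.length; }
  };
})();

collection.add("item");
console.log(collection.count());  // 1
console.log(collection.objects);  // undefined: the array is never exposed
```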
https://medium.com/@tkssharma/javascript-module-pattern-b4b5... -- just read through this
This is a tried and tested pattern. I didn't make this up myself lol
That's why I initially suggested that the developer is probably younger and less experienced; if you've worked with JS for the last five years you are almost guaranteed to have run into this.
A closure is an inner scope that has references to its outer scope.
It happens any time a function is created (technically, even at global scope). That even goes for IIFEs (Immediately Invoked Function Expressions), since the function is created first (with references to its outer scope) and THEN is called; these are two separate "events" in JS. The call is not part of the function declaration/expression and therefore comes next. Between the two there's a gap, and that's where you'd say there's a closure. After that call the reference to the function ceases to exist, and garbage collection tears down the function, and with it the closure.
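The create-then-call split can be made visible by writing the two steps out explicitly (a minimal sketch):

```javascript
var outer = "visible";

// Step 1: creation. The function expression closes over `outer`.
var fn = function () { return outer; };

// Step 2: the call, a separate event that happens afterwards.
console.log(fn()); // "visible"

// An IIFE performs the same two steps back to back in one expression.
console.log((function () { return outer; })()); // "visible"
```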
The issue is when you use this pattern for initialising what are effectively classes. Then every time you use `new`, the functions have to be wrapped again.
I don't know where exactly you were taught this, or whether you read my whole spiel about that being a memory/CPU tradeoff.
I know I put up a wall of text, but if you have time to read one article on JS out of all of this, it's this one - https://medium.com/javascript-scene/common-misconceptions-ab...
"What do you mean by "encapsulation" / "hard private"? It means that private fields are purely internal: no JS code outside of a class can detect or affect the existence, name, or value of any private field of instances of said class.."
https://github.com/tc39/proposal-class-fields/blob/master/PR...
Don't call it a proposal if you already have plans to ship it. Private fields haven't been accepted as a standard yet. They should not be shipped except behind a browser flag.
Second, the rules of how JS is standardised say that changes are actually not allowed to go into the final spec until at least two major players implement support. So this is _exactly_ how it's meant to be done.
A substantial amount of discussion and consideration has already happened prior to this point, as well as implementations into transpilers (eg Babel) which people have been using for some time.
This comment explains a bit further: https://news.ycombinator.com/item?id=18676717
The TC39 process states that limited spec changes may still occur after a stage 3 proposal. Until a proposal reaches stage 4, it should be hidden behind a settings flag. This helps us prevent situations like flexbox, where users code against a specification and find out later that their code has broken in subtle ways.
> as well as implementations into transpilers (eg Babel)
Which is meaningless. Babel is not a part of the standards process, they're free to ship anything they want. Encouraged, in fact, because that provides more in-the-wild usage data.
It is super meaningful - it helps identify potential issues in real world use-cases and it creates effectively a "market" of people already using the feature who want to be able to use it without transpiling. It's also pretty rare that the implementation changes meaningfully after implementation into Babel from around stage 2.
Transpiler implementations do not count as hard restrictions on whether or not a proposal can change in the future. They're allowed to do whatever they want -- the fact that they create demand is very useful, but should not be seen as a restriction on whether or not the standard can evolve past that point.
"It's also pretty rare" is exactly the problem. They can change. When browsers ship a non-standardized behavior that's on by default, they are effectively cementing it. It is very hard to correct a broken implementation that is live, in the wild, on normal sites. The fact that these changes are rare doesn't make it less important to follow the established process. Stage 3 is one last safety check before the feature gets turned on for everyone.
What is the point of having a stage 3, if browser makers treat it as equivalent to stage 4?
Good or bad, that is the governing body the browser makers honor.
Stage 3 proposals should not be shipped on by default. There still may be spec changes between a stage 3 and stage 4 proposal.
[0]: https://github.com/tc39/proposal-class-fields
[1]: https://tc39.github.io/process-document/
Someone trying to make an application work who is looking at your classes in a debugger is going to poke at whatever inside details they can see that help them get their job done quicker. Now if you change that, their application breaks when they update. Not good for anyone.
Private fields make it easier for component developers to get things done without worrying about breaking users, and make it easier for users to upgrade their components without as much breakage.
I mean, not every software problem has to be solved with software.
Was this # thing voted on by the community or something?
- attempting to assign to a non-existent private property via `this.prop` would silently create a public property `prop` in JS, which they claim would be a source of bugs.
- they also wanted to allow creating a public property on a subclass with the same name as a private property on a parent class—having the accessors the same could be confusing.
That said, they could still have had keywords for definition with an alternative accessor. # is definitely not idiomatic; it's extremely ugly.
(Please don’t tell me I can just constrain the code I write to the subset of features I care about. We all read more code than we write.)
It’s not even ironic at this point.
https://news.ycombinator.com/newsguidelines.html
A more reasonable policy for JS would be to (a) contain it - no more changes, no more extending its use outside of its current surface, (b) develop an alternative approach that would - crucially - support competing languages so that we don't get stuck in this way again, and (c) gradually shrink and eliminate JS, with the goal of complete retirement.
The second paragraph you wrote seems to indicate that you believe people only use JS because they're forced to - since browsers don't support anything else.
That's not the case. People choose to use JS in all sorts of places where they could make use of many other options.
Development of the JS spec is complicated somewhat by the strict rules of web compatibility. But imo the benefits of that compatibility guarantee far outweigh the downsides. Having slightly ugly/unusual syntax is not a major issue.
Other places - I guess you mean Node, VS Code, Electron, etc.? I've always assumed that was either because of the available skill set (web devs know JS, so they build their tools in JS) or to reuse JS code between the web and elsewhere. So not because it's a good language, but because people know it or solutions exist in it.
Is that what you meant?
To me, Node (for example) makes sense only if your devs don’t know other languages or you want to hire cheaply - I’d always go for a “proper” (better designed, more stable, elegant, robust) language on the backend given the choice.
It works fantastically well in IO-bound applications for a large number of reasons. It's not far off the most lightweight option for building simple systems, the ecosystem focuses very much on composing smaller libs rather than using hefty frameworks, and its single-threaded nature is actually a _huge_ benefit (though you wouldn't want to use it for CPU-bound work).
There's also the fact that JS these days is essentially a completely different language from the JS which browsers supported 5 years ago. I started using Node before those changes, but it has only become better with them.
I think you'd be surprised how common this exact story is. A lot of Node engineers are gradually moving over to Go, but it's still a great system in its own right.
Also, in reality, if you're a pre-existing browser JS developer, you're going to bring a load of baggage that will not help you get used to Node. I'd almost say you're better off moving to it if you don't already know the language.