
> When it comes to the appearance of conflict, I am not an ideal owner of The Post. Every day, somewhere, some Amazon executive or Blue Origin executive or someone from the other philanthropies and companies I own or invest in is meeting with government officials. I once wrote that The Post is a “complexifier” for me. It is, but it turns out I’m also a complexifier for The Post. - https://archive.is/flIDl

It kind of shocks me how someone can seemingly understand those things but then continue to try to helm the ship. You know you're having a negative impact, so why stay at that point, unless you have some ulterior motive?

I don't feel like the Washington Post becoming a shadow of itself is any surprise, when even the owner is aware of the effect they have on the publication, yet does absolutely nothing to try to change it.


...but if you pull the lever and let people talk about suicide on your platform, your platform will actively contribute to some unknowable number of suicides.

There is, at this time, no way to determine how the number it contributes to would compare to the number it prevents.


The "simple" (as distinct from "easy") option is that anything fast gets a dedicated route that's physically separated - maybe by elevation or such - from everything around it.

(author of PokerBattle here)

Well, you're not wrong :) Vercel is not the one to blame here; it's my skill issue. The entire thing was vibecoded by me, a product manager with no production dev experience. Not to promote vibecoding, but I couldn't have built it any other way myself.


I don't know what kind of data you are dealing with, but it's illogical and against all best practices to have this many keys in a single object. It's equivalent to saying that having tables with 65k columns is very common.

On the other hand, most database decisions are about finding the sweet-spot compromise tailored toward the common use case they are aiming for, but your comment sounds like you are expecting a magic trick.


I'm not sure about Nextcloud, but the French government built a thing that looks pretty good: https://docs.numerique.gouv.fr/home/

If you want a full suite, the German government has been working on integrating and packaging a whole open source productivity stack: https://www.opendesk.eu/en


There are like 3 comments, what the fuck are you on about? You sound like a fucking bot.

Versions >= 6.0.0 and <= 6.0.36 are not being fixed by Microsoft.

Fixes are available for .NET 6 from HeroDevs' ongoing security support for .NET 6, called NES* for .NET.

*Never-Ending Support


They're about as smart as a person who's kind of decent at every field. If you're a pro, it's pretty clear when it's BSing. But if you're not, the answers are often close enough.

And just like humans, they can be very confidently wrong. When any person tells us something, we assume there's some degree of imperfection in their statements. If a nurse at a hospital tells you the doctor's office is three doors down on the right, most people will still glance at the first and second doors to make sure those aren't it, then look at the nameplate on the third door to verify that it's right. If the doctor's name is Smith but the door says Stein, most people will pause and consider that maybe the nurse made a mistake. We might also consider that she's right but the nameplate is wrong for whatever reason. So we verify that info by asking someone else, or by going in and asking the doctor themselves.

As a programmer, I'll ask other devs for guidance on topics. Some people can be absolute geniuses but still dispense completely wrong advice from time to time. Oftentimes they'll lead me generally in the right direction, but I still need to use my own head to analyze whether it's correct and implement the final solution myself.

The way AI dispenses its advice is quite human. The big problem is it's harder to validate much of its info, and that's because we're using it alone in a room and not comparing it against anyone else's info.


Awesome. Yes, please do; or email me at johan@dlog.pro

I frequently get into this argument with people about how Postel's law is misguided. Being liberal in what you accept comes at _huge_ costs to the entire ecosystem and there are much better ways to design flexibility into protocols.
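A toy illustration of the trade-off, using a made-up "KEY: value" header format (not any real protocol; the function names here are hypothetical), in Python:

    import re

    # Strict: accept only exactly "KEY: value"; reject everything else.
    def parse_strict(line: str) -> tuple[str, str]:
        m = re.fullmatch(r"([A-Z-]+): (.*)", line)
        if not m:
            raise ValueError(f"malformed header: {line!r}")
        return m.group(1), m.group(2)

    # Lenient (Postel-style): also tolerate missing spaces, lowercase keys, etc.
    # Once senders start relying on these quirks, every future parser has to copy them.
    def parse_lenient(line: str) -> tuple[str, str]:
        key, _, value = line.partition(":")
        return key.strip().upper(), value.strip()

    print(parse_lenient("content-length:42"))   # ('CONTENT-LENGTH', '42') -- the quirk now "works"
    print(parse_strict("CONTENT-LENGTH: 42"))   # ('CONTENT-LENGTH', '42')
    # parse_strict("content-length:42") raises, pushing senders to fix their output instead.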

It might be because it wasn’t a technique as such. I don’t visualise not thinking, I just stop thinking about things, but I also don’t have a constant inner voice talking to/with me as I understand many people do.

These chains become easy to read and understand with a small language feature like the pipe operator (Elixir) or threading macro (Clojure) that takes the output of one line and injects it into the leftmost or rightmost function parameter. For example, in Elixir:

    "go "
    |> String.duplicate(3)               # "go go go "
    |> String.upcase()                   # "GO GO GO "
    |> String.replace_suffix(" ", "!")   # "GO GO GO!"

And in Clojure:

    ;; Nested function calls
    (map double (filter even? '(1 2 3 4)))

    ;; Using the thread-last macro
    (->> '(1 2 3 4)
         (filter even?)   ; the list is passed as the last argument
         (map double))    ; the result of filter is passed as the last argument
    ;=> (4.0 8.0)

Things like this have been added to Python via a library (Pipe) [1], and there is a proposal to add this to JavaScript [2].

1: https://pypi.org/project/pipe/
2: https://github.com/tc39/proposal-pipeline-operator
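For flavor, a minimal sketch of the same idea in plain Python (this is not the Pipe library's actual API; `pipe_step` is a hypothetical helper for illustration):

    # Hypothetical helper: wraps a function so that `value | step` applies it,
    # roughly mimicking Elixir's |> or Clojure's ->> in plain Python.
    class pipe_step:
        def __init__(self, fn):
            self.fn = fn

        def __ror__(self, value):  # invoked by `value | self`
            return self.fn(value)

    evens   = pipe_step(lambda xs: [x for x in xs if x % 2 == 0])
    doubled = pipe_step(lambda xs: [2.0 * x for x in xs])

    print([1, 2, 3, 4] | evens | doubled)   # [4.0, 8.0]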


Who sees Spain as a modern democracy? Only those who benefit from the rampant clientelism.

Humans are the benchmark for AGI, and yet a lot of people are outright dumb:

> Said one park ranger, “There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.”

[1] https://www.schneier.com/blog/archives/2006/08/security_is_a...


Also possible we get something "close enough" to AGI and it's really fucking useful.

AGI is the end-game. There's a lot of room between current LLMs and AGI.


I don't mean to question you personally (after all, this is the internet), but comments like yours do make the reader think: if he has 5x'ed his coding, was he any good to begin with? I guess what I'm saying is, without knowing your baseline skill level, I don't know whether to be impressed by your story. Have you become a super-programmer, or is it just cleaning up stupid stuff that you shouldn't have been doing in the first place? If someone is already a clear-headed, efficient, experienced programmer, would that person see anywhere near the benefits you have? Again, this isn't a slight on you personally; it's just that a reader doesn't really know how to place your experience into context.

Clearly a Kool-aid enjoyer


Maybe in a few decades, people will look back at how naive it was to talk about AGI at this point, just like the last few times since the 1960s whenever AI had a (perceived) breakthrough. It's always a few decades away.

Completely agree (https://news.ycombinator.com/item?id=45627451). LLMs are like the human-understood output of a hypothetical AGI; 'we' haven't cracked the knowledge-and-reasoning 'general intelligence' piece yet, imo, the bit that would hypothetically come before the LLM, feeding it the information to convey to the human. I think that's going to turn out to be a different piece of the puzzle.

Oh dear lord, GCP could be the intuitive one?! I haven't used anything else, but dear lord, that's shocking and not at all surprising at the same time.

They perform similarly on benchmarks, which can be fudged to arbitrarily high numbers by just including the Q&A into the training data at a certain frequency or post-training on it. I have not been impressed with any of the DeepSeek models in real-world use.

I think AGI isn't the main thing. The agreement gives MSFT the right to develop its own foundation models, and lets OpenAI stop using Azure for running and training its foundation models, all while MSFT still retains significant IP ownership.

In my opinion, whether AGI happens or not isn't the main point of this. It's the fact that OpenAI and MSFT can go their separate ways on infra & foundation models while still preserving MSFT's IP interests.


Internal attacks are easy enough in a large enough network.

Unfortunately quite inefficient; I'm sure higher framerates must be possible.

It’s odd to me that, among the clouds, you excluded AWS.

Leaks are intentional. There's a reason they always happen and always in the same way.

Humans are bad eyewitnesses. We don’t like this, so it’s easier to scour the world looking for evidence that we were right all along. Combine this with how well we see patterns even when they aren’t there, and you get UFOs.

You know ChatGPT can't prescribe Adderall, right?


