- cross-posted to:
- framework@lemmy.ml
Ultra 7 155H with six P-cores, eight E-cores, and eight graphics cores; or an Ultra 7 165H with the same number of cores but marginally higher clock speeds.
WTF is Intel smoking with these naming schemes I can’t even understand what this means. Thank fuck AMD is an option.
Yeah because AMD has such great naming schemes…
The number after "Ultra" works pretty much the same as the old iX scheme: 3 is entry, 5 is mid range, 7 is high end, 9 is bad decision making.
The number after that works kind of like before, so a higher number means better, probably with an extra digit for upcoming generations. Remember, i5s got 4-digit names too; the extra digit was prepended to indicate the generation.
Thing is, there’s no really good naming scheme, because there are so many possible variants/dimensions: base clock, turbo clock, TDP, P-core count, E-core count, PCIe lanes, socket, generation, … How would you encode all that in a readable name?
Just concat it: intel i7 11g4p8e128l420c520b
11th gen, 4 P-cores, 8 E-cores, 128 lanes, 4.20 GHz base clock, 5.20 GHz boost clock.
The letters in between keep it readable. Maybe drop the lane count when it doesn’t vary for a given P-core/E-core combination.
G.Skill does a similar thing: F5-5200J3636C16GX2-FX5
5200 MHz unbuffered DIMM, 36-36-36 timings, 1.20 V, 16 GB per module, dual channel, 2 modules in the kit.
See here: https://www.gskill.com/faq/1502180912/DRAM-Memory
edit: you could also encode the architecture with a letter to indicate refreshes, and add suffixes for APUs and maybe TDP.
You could even use letters for some of the numbers: there aren’t that many distinct core counts, so a = 1 P-core, b = 2 P-cores, c = 3 P-cores, … More than 26 P-cores is unlikely to ever show up in a consumer CPU. Same for E-cores, maybe.
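As a sketch, a name in the proposed scheme could even be decoded mechanically. The field layout (generation, P-cores, E-cores, lanes, base/boost clocks in hundredths of GHz) follows the example above; the function and field names here are my own illustration, not anything Intel actually ships:

```python
import re

# Pattern for names like "11g4p8e128l420c520b":
# digits followed by a letter marking what the digits mean.
PATTERN = re.compile(
    r"(?P<gen>\d+)g"      # generation
    r"(?P<pcores>\d+)p"   # performance cores
    r"(?P<ecores>\d+)e"   # efficiency cores
    r"(?P<lanes>\d+)l"    # PCIe lanes
    r"(?P<base>\d+)c"     # base clock, hundredths of GHz
    r"(?P<boost>\d+)b"    # boost clock, hundredths of GHz
)

def parse_name(name: str) -> dict:
    """Decode a hypothetical concatenated CPU name into its spec fields."""
    m = PATTERN.fullmatch(name)
    if m is None:
        raise ValueError(f"unrecognized name: {name}")
    f = m.groupdict()
    return {
        "generation": int(f["gen"]),
        "p_cores": int(f["pcores"]),
        "e_cores": int(f["ecores"]),
        "pcie_lanes": int(f["lanes"]),
        # clocks are encoded as e.g. 420 -> 4.20 GHz
        "base_ghz": int(f["base"]) / 100,
        "boost_ghz": int(f["boost"]) / 100,
    }

print(parse_name("11g4p8e128l420c520b"))
```

That a one-line regex recovers the whole spec sheet is exactly the trade-off being debated here: maximally informative, but arguably not "readable" for a human skimming a product page.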
You really think that is more readable?
Yes, you can see how two CPUs differ without going to Intel’s page and reading the spec sheet, rather than just seeing that they are different.
What does readable mean to you?
For example, being able to get a rough sense of the performance from the name.
An i5-10500 is faster than an i5-10400. But is 6p4e better than 4p8e?
It’s illusory to think you can fit everything about a CPU into its name. What you’re proposing is essentially the entire value column of the spec sheet concatenated.
If 10500 means 6p4e and 10400 means 4p8e, which one is faster depends on the workload, so comparing by a single number isn’t good either, and that’s how it currently is.
Also, if 10900 then means 12p0e, it may not be faster for gaming when the game is single-threaded, so the comparison breaks again. And it’s not good for mobile devices that care about battery life. What in the current name tells you that?
And yes, it’s basically just the most important or most-compared specs concatenated, which describes the CPU. I think that’s what a name is supposed to do.
And how many people do you think could accurately, or even ballpark, estimate their workload? I couldn’t tell you whether my workload would benefit from more E- or P-cores, or by how much.
What you’re implying here is an illusion of accuracy. You want accurate numbers for something that you can’t really judge anyway. These numbers don’t mean anything to you, they just give you the illusion of knowing what’s going on. It’s the “close door” button in an elevator.
Readable means able to be read quickly and easily. That name has too much information.
A name is supposed to describe the thing; too much information isn’t the problem. If you think it’s too long, you can shorten it to just enough information that different CPUs get different names, which is what I did.
edit: also, the question was how to encode the different CPU variants into a name, so the result has to include that information.
That doesn’t make it readable. That makes it efficient.
Performance cores versus efficiency cores?
They have high-power and low-power cores. Borrowed from ARM’s “big.LITTLE” design.
I can’t even understand what this means
I think that’s the intent, and they fucking nailed it.
And AMD is following along with the stupid naming scheme in the next generation.
It’s the intent, like “high-end” car models, so you can’t distinguish them by features or age.
This is where I’d put my Framework laptop
IF THEY’D SELL ME ONE
I know, right. Instead of lowering the price. Maybe sell it outside the US.
This is all well and good, but what I really want is a Framework 2-in-1. That would be drool worthy.
I’m with you. A touchscreen is a must have feature for me.
marry me
Do you mean the tablet/PC combos?
Yeah, like the Surface Pro. Basically a tablet PC with a keyboard/trackpad attachment.
But with an actually tablet-worthy screen section.
Ideally, you’d have an ARM CPU and a decent battery in the screen section, and a dedicated GPU plus proper battery in the bottom section.
I’d like one with an x86 CPU, but it would be nice to have ARM as an option.
deleted by creator
x86 2-in-1s already exist.
Oooooooh, I’d buy that.
The Core Ultra chips, like the Ryzen 7040-series chips, also include a neural processing unit (NPU) that can be used to accelerate some AI workloads. But both NPUs fall far short of the performance required for Recall and other locally accelerated AI features coming to Windows 11 24H2 later this year;
Why even waste the fucking space on the die then?
Now people want recall?
I sure as hell don’t, but it seems extra pointless when it can’t even run the workloads it was designed for.
I’m sure it still works in photoshop or whatever, just not the windows stuff.
Because the NPUs were designed and built and included long before Windows 11’s AI features were announced?
If I recall correctly, it typically takes about 4 years for a CPU to go from design to distribution.
Meteor Lake was taped out in May 2021 and launched in December 2023. Still much slower than the pace of LLM development, to be fair. It seems more like an “if you build it, they will come” approach. But that’s also how we got stuck with (for most consumer purposes) useless tensor cores on our GPUs. Does anyone even give a shit about raytracing/DLSS anymore?
It actually sounds like Microsoft is betraying Intel for Qualcomm, since their upcoming processor in the new Surface tablet is the only one that actually meets the requirements. So it looks like Microsoft doesn’t give two shits about supporting existing hardware either way.
Tensor cores can be used to play chess, generate images, do realistic text to speech, do noise cancellation, content-aware fill, etc.
They are only useless to you and other people with no imagination
Chess engines have outplayed humans for thirty years, and they didn’t need teraflops of computing power to do it.
Generative AI is actively harmful to the environment, slowing the phase-out of coal in the US and guzzling billions of gallons of water. It’s likely going to kill jobs and it’s already filling the internet and the academic world with garbage. It’s also likely a bubble that will burst before long, potentially bringing the economy down with it.
I’ll give you noise cancellation and text-to-speech, that’s pretty cool.
But personally, I’d rather have more CUDA cores.
That middle paragraph is very misleading. It’s generative AI as a service that is actively harmful to the environment. Having a 15 W chip do tasks like erasing objects from a photo is not any more harmful to the environment than a GPU that uses 15 W. In fact, NPUs can be more efficient at some tasks than GPUs.
The problem is opening your phone/browser and being able to call GPT-4 on demand, waking up a cluster of 128 Nvidia A100s operating at around 300-400 W each. That’s 51.2 kW.
Now you can draw some positives and negatives from that figure, such as:
- Given that an iPhone 15 Pro’s A17 has a thermal design power of 8 W, GPT-4 on the server is about 6400× more energy intensive than anything you can do on an iPhone. 10 seconds of GPT-4 use a similar amount of energy to an iPhone 15 Pro running flat out at maximum power for 18 hours. Now, in those 10 seconds OpenAI says they “handle multiple user queries simultaneously”, but still, we’re feeding the machine.
- 51.2 kW is also roughly how much power a large SUV needs to roll at constant speed on a motorway. Each of those large clusters uses a similar amount of energy to a single 7-seater SUV, but serves many users at the same time. Plus, unlike cars, a large portion of their energy comes from renewables. So yes, I agree it’s a significant impact, but it’s largely overstated and we have bigger fish to fry; personal transport is a much bigger issue.
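The arithmetic behind both bullet points can be checked directly. All inputs here are the assumptions stated above (128 GPUs at the 400 W end of the range, an 8 W phone chip, a 10-second query), not measured data:

```python
# Cluster power, using the upper end of the 300-400 W per-GPU range.
gpus = 128
watts_per_gpu = 400
cluster_w = gpus * watts_per_gpu        # 51,200 W = 51.2 kW

# Compare against the assumed A17 thermal design power.
iphone_w = 8
ratio = cluster_w / iphone_w            # cluster draws this many iPhones' worth

# Energy of one 10-second query, expressed as hours of flat-out iPhone use.
query_s = 10
cluster_energy_j = cluster_w * query_s
iphone_hours = cluster_energy_j / (iphone_w * 3600)

print(f"{cluster_w / 1000} kW, {ratio:.0f}x an iPhone, "
      f"one query = {iphone_hours:.1f} h of flat-out iPhone use")
```

This reproduces the 51.2 kW and 6400× figures, and the "18 hours" claim comes out to about 17.8 hours, so the numbers in the comment are internally consistent.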
I don’t need to outplay humans, I need to see the optimal line to analyze it. Chess still isn’t solved, so Leela Zero is still helpful because it gives better advice than older engines. Even Stockfish went neural network, just a smaller one that searches deeper. They still can’t tell us whether chess from the starting position ends in a draw, the way checkers does.
Killing jobs is good. It’s already freeing people from having to write things like promotional emails. Maybe they’re sad they don’t have a job anymore, but unemployment is 4%; it’s hardly difficult to get a different one. It’s not an important job anyway. I wouldn’t feel creative writing about a Labor Day sale or whatever.
I’m so curious to see how a Qualcomm gambit plays out for Microsoft.
With the ethos at Qualcomm being to support a chip for one year and then move on, I have trouble believing they’ll update the drivers for a major Windows release.
Google browbeat them for nearly 10 years, and then ended up going with the mostly Samsung-designed chip called Tensor just to compete with Apple on years of updates.
NPUs existed before recall and have other uses apart from that.
Oh god there’s round corners?
It’s because whoever paid for the screen’s R&D asked for rounded corners. Framework just took the design and retooled the connectors for their own use case, since that’s significantly cheaper than commissioning an entirely new panel.
Well, time to not buy it. I have a Pixel 7a and the rounded corners drive me nuts, not to mention the hole-punch idiocy. The phone renders a rectangle; show me a rectangle.
If you were actually hoping to buy one but the rounded corners are a dealbreaker, then you may be interested to know that the DIY edition lets you mix and match the older display with the newer motherboards. Looks like opting for the older display even saves you $130 on the purchase price.
I wanted the higher resolution display for my existing Framework, but just… why???
Because there’s (very) expensive R&D attached to ordering a custom screen. It’s why many companies on the market use the same panel when making monitors (e.g. for gen-1 WOLED monitors, LG, Asus, MSI(?) and I think Acer all used the same LG panel).
Just “wanting a higher-res screen” isn’t something that’s trivial to order, especially since the FW13 uses a 3:2 screen, an aspect ratio usually used by tablets.
Yes, I know it’s an R&D thing. I just don’t get the obsession with rounded screen corners.
But hole-punch cameras and rounded corners that go to the edges are so much better than a small bezel with square content! /s
Drives me nuts as well.
I completely understand that there are people who want the smallest phone and laptop possible and will happily trade all kinds of things for that, including an obstructed display, but I definitely am not in that camp.
You could maybe instruct the DE to cut off the screen at the edges to make it a rectangle again, if you really want the high-res panel that badly.
Edit: on X, use xrandr; on Wayland it’s more difficult and maybe not possible yet. Found this: https://askubuntu.com/questions/1067018/change-screen-resolution-using-the-terminal-command-line-in-wayland-ubuntu-1
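For the X11 route, a rough sketch of the idea: define a slightly shorter mode and switch the panel to it. Everything here is an assumption — the output name `eDP-1`, the number of rows to trim, and whether the panel’s scaler actually crops rather than stretches all depend on the hardware:

```shell
# Generate timings for a mode slightly shorter than the native 2880x1920,
# trimming rows to clear the rounded top corners (row count is a guess).
cvt 2880 1900 120

# Register the mode and assign it to the internal panel.
# "eDP-1" is an assumption; check `xrandr --listmonitors` for the real name.
# Paste the Modeline values that cvt printed after the mode name.
xrandr --newmode "2880x1900_120" <modeline values from cvt output>
xrandr --addmode eDP-1 "2880x1900_120"
xrandr --output eDP-1 --mode "2880x1900_120"
```

Whether this letterboxes the corners away or just scales the image depends on the panel, so it may not actually hide the rounding.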
There are two display options: 2256x1504 60 Hz without rounded corners, and 2880x1920 120 Hz with rounded corners.
The specs are identical to the Surface Pro 11’s, and Framework said they are using an existing panel, so they might be using the same one, which makes it cheaper to develop since M$ would already have paid for the development.
What critical information are people putting in the six missing pixels?
I feel like there’s more than 1.5 pixels per corner taken out.
The bottom corners aren’t rounded.
I read somewhere it was rounded 3 mm at the top and 1 mm at the bottom. Can’t find it now.
They don’t seem to list the exact specs of the display on their website (yet), but judging by this photo, it looks like the bottom corners are pretty much identical to the ones on the previous display.
It was in the announcement vid; you’re correct.
3 pixels in each corner? Why?
Why what?
Because it comes from a laptop with rounded corners on the top of the lid and a flat hinge on the bottom.
Rounded corners don’t bother me at all, but a notch sure as hell would.
I was on the Framework wait-list for over a year, but bailed because they didn’t kick this out in time.
I really hope they start shipping to Denmark soon. We’re such a tiny market we often get ignored or forgotten.
Fuck yeah was going to order a new one soon
deleted by creator
Realistically, the target audience is organizations: nowadays most business laptops are carried between docking stations with the occasional meeting or air travel in between, and 13" is an excellent size to meet those needs.
When hooked to a docking station, the screen size and keyboard are entirely irrelevant, and modern laptop performance is… honestly crazy good.
When in a meeting, it’s probably being either used to take notes fullscreen or show a presentation, so pretty neutral.
Finally, when traveling, you really can feel the difference between a 13" and a 15" when you’re running on too short a layover between flights.
You nailed it. I’m the target audience for this and that is exactly how I use my laptop. Now if only framework would add a touchscreen option and I’d buy it tomorrow.
I’m right there with you. As silly as it is, I absolutely love the touchscreen on my Lenovo. I could live without it, sure, but I don’t wanna. Once Framework supports it, I’m there.
13" is a good “on call”/travel size. It’s not big enough to do serious work on, but in a pinch it’s definitely big enough to get something done. It’s more comfortable on a flight, and you can toss it in a fairly small bag and take it with you. It’s lighter but can still manage a reasonably sized keyboard. And when I get to my house or my job, I’m plugging into an external mouse and keyboard anyway.
It’s not for everyone, but my 13-inch’s motherboard died about 2 months ago and I am definitely in the market. Now, if I can just actually buy one of these, we’ll see.
To each their own. I’ve personally got a 14” MBP (which is physically the same size as my 13” was; they just have smaller bezels) and work provides me a 16” MBP. The 16 is unwieldy, massive, heavy, too large on my lap, barely fits in my laptop bag, and is a general pain to lug around. Every time I use it, I’m reminded of how glad I am that I got the 14” instead. I feel like the 16 is the worst of both worlds: too big to truly be a portable machine, but too small to do real work on. Sometimes I’ll think “I wish I had more screen real estate” on the 14, but I do on the 16 as well, so it doesn’t really solve the issue while also being large and heavy.
In short, it depends on what you like, and what you need to do. Being an ultraportable is a big plus and there are monitors in most places I need more space anyway.
The actual screen area is around that of a 14" 16:9 panel, if that makes a difference.
Everyone is different. My favorite computer is my 11" netbook because, despite being slow, it fits in any bag and in my side table, it’s so light I can easily carry it with one hand without putting undue pressure on my wrists, I can use most books as a lap desk, and I don’t have to clear off as much space on the table (I have two kids, so it’s never clear).