A fork/continuation of the original since the author has been away for a while. Supports kernels up to 6.15 with lots of other changes.
What in the fuck are you even talking about?
The PS5 controller is Bluetooth, standard Bluetooth, and wired it uses both standard HID and standard USB audio. My point is: why isn’t MSFT doing the same?
Also, since you clearly don’t know the first fucking thing about what you’re talking about:
Look at how stupidly broken this is! You need drivers to use it over standard USB!
Everything about this design is broken, it should be kicked out of the kernel and MSFT should release firmware that actually implements HID like normal, non-stupid people.
8bitdo has exactly this, the same dongle system and pairing, and again, it works perfectly without any drivers at all, because they’re not morons.
The 8bitdo version is easier to implement because it’s one dongle per controller. The Xbox dongle supports eight controllers per dongle. This complicates things; I assume they didn’t want to emulate an eight-port USB hub on the dongle.
You can use BT, but there’s a reason 8bitdo has a dongle as well: BT has worse latency, I assume due to protocol overhead.
And at least Xbox controllers are cross-compatible. You can’t use a DS4 on a PS5, even if you’re playing a PS4 game.
https://github.com/abcminiuser/lufa
A kid literally did this in high school. This is without hardware support, just GPIO, but he also implemented the full stack on USB AVRs and Cortex-Ms, and one of the things he emulated was multiple devices on a hub.
Well, yeah, obviously it can be done. What’s the latency, though? A hub’s muxing alternates between packets from different devices, but even USB 1.1 has 64B packets, leaving 64b per controller if you report all eight in one packet. That’s 15 digital buttons, six axes at 6b each, and 13b left over for routing.
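A minimal sketch of that packing in C bitfields; the names and exact field split are my own illustration, and bitfield layout is compiler-dependent, so treat it as arithmetic rather than a wire format:

    #include <stdint.h>

    struct pad_state {            /* 64 bits per controller */
        uint64_t buttons : 15;    /* 15 digital buttons */
        uint64_t lx : 6, ly : 6;  /* left stick, 6b per axis */
        uint64_t rx : 6, ry : 6;  /* right stick */
        uint64_t lt : 6, rt : 6;  /* triggers: six axes total */
        uint64_t routing : 13;    /* player index, seq, etc. */
    };                            /* 15 + 6*6 + 13 = 64 bits */

    struct dongle_report {
        struct pad_state pads[8]; /* 8 * 8B = 64B: one USB 1.1 packet */
    };

    _Static_assert(sizeof(struct dongle_report) == 64, "fits one packet");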
However, I can’t think of a way to get the computer to decode one 64B packet into eight separate HID polls without a custom driver. If you use a hub, you’re limited to 8kHz total by the spec, but many EHCI controllers limit that to 1kHz. 125Hz per player is not great.
I can’t confirm that this is the reason or that there isn’t a different way around the restriction, but it seems likely from what I know of USB hubs.
TL;DR: with a custom driver, you can report all controllers on all USB polls rather than each taking up a whole interval, giving you 8x the polling rate compared to an emulated hub with 8 standard HIDs.
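To make the TL;DR concrete, here’s a hedged sketch of the fan-out such a custom driver could do, using the real Linux input API (input_report_key()/input_report_abs()/input_sync()); the report structs are the hypothetical ones from the packing sketch above:

    #include <linux/input.h>

    /* Fan one 64B dongle report out to eight separate input devices,
     * one per controller, in a single pass. Types from the packing
     * sketch above are assumed in scope. */
    static void fan_out(struct input_dev *inputs[8],
                        const struct dongle_report *r)
    {
        for (int i = 0; i < 8; i++) {
            const struct pad_state *s = &r->pads[i];

            input_report_key(inputs[i], BTN_SOUTH, s->buttons & 1);
            /* ...the remaining 14 buttons... */
            input_report_abs(inputs[i], ABS_X, s->lx);
            input_report_abs(inputs[i], ABS_Y, s->ly);
            input_sync(inputs[i]);  /* one event batch per pad per packet */
        }
    }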
Firstly, it’s not a real hub, it’s an emulated hub, and you can do that while emulating everything as USB 2.0.
Secondly, you can have multiple HID interfaces, each with its own interrupt endpoint, on a single device; see the descriptor sketch after this list.
Thirdly, you wouldn’t be polling; these would be HID interrupt URBs, and you can storm them one per microframe if you want, they just show up in the EHCI buffers.
Finally, no human is overflowing the HID interface like this, not even eight of them.
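On the second point, the descriptor side is just the same HID interface repeated; a hedged sketch, with PAD_INTERFACE, REPORT_DESC_LEN, and all lengths made up for illustration:

    /* Hypothetical configuration-descriptor fragment for a composite
     * device exposing eight gamepads as eight standard HID interfaces,
     * each with its own interrupt-IN endpoint. */
    #define REPORT_DESC_LEN 50          /* made-up report descriptor size */
    #define TOTAL_LEN (9 + 8 * 25)      /* config header + 25B per pad */

    #define PAD_INTERFACE(n)                                              \
        /* interface descriptor: number n, 1 endpoint, class 3 = HID */   \
        9, 0x04, (n), 0, 1, 0x03, 0x00, 0x00, 0,                          \
        /* HID descriptor pointing at a gamepad report descriptor */      \
        9, 0x21, 0x11, 0x01, 0, 1, 0x22, REPORT_DESC_LEN, 0,              \
        /* interrupt-IN endpoint 0x81+n, 64B, 1 ms full-speed interval */ \
        7, 0x05, 0x81 + (n), 0x03, 64, 0, 1

    static const unsigned char config_desc[] = {
        /* configuration header: 8 interfaces, bus-powered, 500 mA */
        9, 0x02, TOTAL_LEN & 0xff, TOTAL_LEN >> 8, 8, 1, 0, 0x80, 250,
        PAD_INTERFACE(0), PAD_INTERFACE(1), /* ... up to PAD_INTERFACE(7) */
    };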
So those polls are generally isochronous to the USB bus transaction state, not based on the polling frequency of the CPU. What happens is (there’s a code sketch after the steps):
A USB interrupt URB comes in to the host controller; the URB’s transfer descriptor is written to the descriptor chain.
The controller adds to the descriptor chain; once the chain length exceeds the watermark (or a timeout fires), it interrupts and incoming URBs start being processed.
In the interrupt handler: follow the chain, push completed URBs onto the USB stack queue, trigger the handler tasklet.
The stack processes the URB and routes it to the proper class driver.
The class driver checks whether the device has a file handle open (or an open ref from layered drivers like HID/input).
If so, a poll() or other input read() returns the value.
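Roughly what that looks like from the class driver’s end, as a hedged sketch using the real usb_fill_int_urb()/usb_submit_urb() API; struct pad_dev and pad_handle_report() are hypothetical:

    #include <linux/usb.h>

    struct pad_dev {                     /* hypothetical driver state */
        struct urb *urb;
        u8 *buf;
    };

    static void pad_handle_report(struct pad_dev *pad, void *data, int len);

    /* Completion handler: runs once the host controller has written the
     * transfer descriptor back and the stack has routed the URB to us. */
    static void pad_irq(struct urb *urb)
    {
        struct pad_dev *pad = urb->context;

        if (urb->status == 0)
            pad_handle_report(pad, urb->transfer_buffer,
                              urb->actual_length);

        /* Resubmit so the HC keeps servicing the endpoint each interval. */
        usb_submit_urb(urb, GFP_ATOMIC);
    }

    static int pad_start(struct pad_dev *pad, struct usb_device *udev,
                         struct usb_endpoint_descriptor *ep)
    {
        usb_fill_int_urb(pad->urb, udev,
                         usb_rcvintpipe(udev, ep->bEndpointAddress),
                         pad->buf, usb_endpoint_maxp(ep),
                         pad_irq, pad, ep->bInterval);
        return usb_submit_urb(pad->urb, GFP_KERNEL);
    }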
Now, it’s possible there are multi-input poll() reads in games, and I’m describing Linux here, of course.
For MSFT it’s URB -> IRP -> WDM filter driver stack -> kernel32/DirectInput or the Win32 input stack (WNDPROCs after routing).
In any of these cases, I’m struggling to see how interrupts would come in faster with the same code on PC.
See, the same code probably runs on both MSFT and normal hardware, so it’s going to have the same structure, unless you actually believe a dev team is optimizing input latency that hard. That’s often the lowest priority; they’ll optimize video lag more because it’s more noticeable. The engines themselves use DirectInput, and that’s routed through to libinput in WINE, and the same for all devices.
Btw, DirectInput has a device-based interface, so it couldn’t poll like this anyway: each controller has its own input queue, and you round-robin through them, plucking events out of each input stream when available.
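For illustration, buffered per-device DirectInput reads look roughly like this in C; it assumes each device was already created, given a DIPROP_BUFFERSIZE, and acquired, and handle_event() is hypothetical:

    #define DIRECTINPUT_VERSION 0x0800
    #include <windows.h>
    #include <dinput.h>

    void handle_event(int pad, const DIDEVICEOBJECTDATA *ev); /* hypothetical */

    /* Round-robin over the per-device input queues, draining each one. */
    void poll_pads(IDirectInputDevice8 *pads[], int n)
    {
        for (int i = 0; i < n; i++) {
            DIDEVICEOBJECTDATA ev[16];
            DWORD count = 16;

            HRESULT hr = IDirectInputDevice8_GetDeviceData(
                    pads[i], sizeof(DIDEVICEOBJECTDATA), ev, &count, 0);
            if (hr == DIERR_INPUTLOST || hr == DIERR_NOTACQUIRED) {
                IDirectInputDevice8_Acquire(pads[i]); /* reacquire, retry later */
                continue;
            }
            for (DWORD j = 0; j < count; j++)
                handle_event(i, &ev[j]);
        }
    }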
In any case, you’re not getting the latency improvement, both because the software paths are so different and because no one could actually perceive it.
I’m not trying to be extra pedantic for no reason; I’ve just had to make these decisions before, and this is how we have to think.
We are talking about a driver for the Xbox One wireless adapter.
Microsoft never submitted a kernel driver for this; it’s a third-party module. It’s not Bluetooth - it’s WiFi, using a proprietary blob for authentication.
Not a single one of your claims in this entire thread has been correct.
You’ve literally missed my whole point.
PlayStation Link is this exact same thing, and btw, both controllers are dual-mode: Bluetooth plus a “high-speed wireless interface”, which is basically WiFi or WiFi Direct or some proprietary variant.
My point is: why isn’t it like PlayStation Link, which just presents as HID devices and USB audio devices, without a driver at all? Same low latency, and they even do LDAC.