

Dunno how smart it was to download and run. Could have been a compromised dev account.


When connected to your internal network, what are the results of:
nslookup sub.domain.tld AGH.IP.Address
This should respond authoritatively with the IP you need to reach NPM (its VIP address). If that is not the case, let us see your AGH configuration for your sub.domain.tld.
If that does return the correct IP, verify that it responds over HTTPS using curl on Linux or Windows (on Windows, replace curl with curl.exe):
curl -vvvI https://sub.domain.tld/
If this is not connecting, or it shows a cert error, then there’s a misconfiguration on the NPM side. Screenshots of your site configuration for one of the sites would be helpful. The domain name should match sub.domain.tld (not your duckdns) and be bound to the Let’s Encrypt cert.
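For reference, if the rewrite is missing on the AGH side, it lives under Filters → DNS rewrites in the UI, or in AdGuardHome.yaml. A sketch of the YAML form, where the domain and IP are placeholders and the exact section layout can vary by AGH version:

```yaml
# AdGuardHome.yaml (excerpt) -- section placement may differ between versions
filtering:
  rewrites:
    - domain: sub.domain.tld    # or "*.domain.tld" to cover every subdomain
      answer: 192.168.1.50      # placeholder: NPM's VIP address
```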


Instead of a default gateway, you can configure a route for just your VPN’s IP address via your gateway. You might also need DNS servers, depending on your setup.
Example: ip route add 1.1.1.1/32 via 192.168.1.1 dev eth0
Note that without a script this may be flaky if you’re using DNS to resolve the VPN. It might be better to have a script that resolves the IP(s) of the VPN and then adds routes.
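A minimal sketch of such a script, with the gateway, interface, and endpoint addresses as placeholders (`ip route replace` is used instead of `add` so re-running it is idempotent):

```shell
#!/bin/sh
# Hypothetical sketch: emit "ip route" commands that pin each VPN endpoint
# to the local gateway as a /32 host route. Gateway and interface are
# placeholders -- adjust for your network.
GATEWAY="192.168.1.1"
DEV="eth0"

emit_routes() {
    for ip in "$@"; do
        echo "ip route replace $ip/32 via $GATEWAY dev $DEV"
    done
}

# In real use, feed it freshly resolved addresses, e.g.:
#   emit_routes $(getent ahostsv4 vpn.example.com | awk '{print $1}' | sort -u)
# Demo with two already-resolved placeholder endpoints:
emit_routes 203.0.113.10 203.0.113.11
```

Pipe the output to sh (or drop the echo) from a pre-up hook or cron job so the routes get refreshed whenever the VPN’s DNS answer changes.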
That being said, your VPN software is usually designed to install routes that have higher priority so that they will get used before the local network. One such way is by adding half-internet routes (0.0.0.0/1 and 128.0.0.0/1) which get preferred over the larger default route. If you run ip route once connected you may see those routes present.
While I’m not sure if it works in rootless, take a look at the binhex/arch-delugevpn project, which has scripts to set up a similar network isolation environment.


IMO as a developer this is a sane change. There’s no telling when the format of the first-party API key will change; they may switch from reference tokens to JWTs tomorrow. Validation should just use the token and see if it works.


Pretty sure it’s stock Cinnamon, but I do have extensions installed which could be screwing with things.


Right-clicking the title bar of a window on Linux Mint, the menu appears but I can’t click it until I move the window out from under it (the menu doesn’t close), and then it becomes responsive. I love Linux.


Was about to say it always had this, but I guess it is a change for the people who were grandfathered in. I personally haven’t hit this limit, but I only use it for a select few games that don’t run natively or well on Linux.


I think NetworkChuck has a good set of tutorial videos about self-hosting. For the most part, you can search for what you want to find info on and he probably has a video on it. E.g. Nginx: https://m.youtube.com/@NetworkChuck/search?query=Nginx


I think if you didn’t assign a tag on the Release Profile, it applies to all series.


I do a lot of architecture work for my company, and it’s often easier to have direct access to DNS to make quick changes rather than wait one or more days for an engineer to go change records. If this is just going to be a test environment, perhaps you could delegate a subdomain of your current domain, e.g. add NS records for test.example.com that point to the NS of the contractor’s hosted zone. This gives you control to tear it down (delete the NS records) but allows the contractor the ability to build the environment out.
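A sketch of what that delegation looks like in the parent zone, where the name server hostnames are placeholders for whatever the contractor’s hosted zone actually lists:

```
; In the example.com zone: delegate test.example.com to the contractor's
; name servers (hostnames below are placeholders).
test.example.com.   3600  IN  NS  ns-1.contractor-dns.example.
test.example.com.   3600  IN  NS  ns-2.contractor-dns.example.
```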


I have never done RAID over USB, but I have done various JBOD setups using SCSI. I think the general idea is that USB’s easily disconnected connector, plus the latency overhead of translating SATA to USB and back to SATA, means you have a higher chance of corruption. SCSI setups typically have connectors with locking mechanisms to prevent easy disconnection.
If eSATA is an option it might be better for the performance and it has a latching mechanism to prevent easy disconnection. You can get a 2-port eSATA PCI card for about 50 bucks.
Oh, and if you have a free PCI slot, you could add internal SATA ports and mount the drives internally.


I know Tailscale prefers being installed on every machine, but not all of my machines are even capable of running custom code. I use a single Tailscale subnet router that publishes my internal network to the tailnet, and if the internet is down, everything still works fine internally.
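For reference, a subnet router boils down to one command on the router box (the subnet is a placeholder for your LAN):

```
# Advertise the LAN to the tailnet (enable IP forwarding on the box first):
sudo tailscale up --advertise-routes=192.168.1.0/24
```

The advertised route then has to be approved in the Tailscale admin console, and Linux clients need `--accept-routes` to use it.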


With TrueNAS you can do it two ways: iSCSI disks that are mounted to the VMs, or via NFS. With iSCSI you won’t have access to the data from the TrueNAS side, as the data will be stored as a volume file. With NFS you get the best of both worlds, as you’ll be able to access the files via other TrueNAS services like SMB/SFTP. I have my Jellyfin/Plex running via NFS and have few issues, though I’ve not tested it with large 4K/8K videos yet. I mostly run 1080p.
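If you go the NFS route, the client side is just a mount; a sketch, where the host, dataset path, and mount point are placeholders:

```
# /etc/fstab on the VM (host and paths are placeholders)
truenas.lan:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0  0
```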


For a Stardew Valley type RPG, check out Little-Known Galaxy. I haven’t gotten far in its story but it’s been pretty fun.
For more traditional RPGs, I enjoyed CrossCode, though it is a bit grindy if you want to 100% it.
Sea of Stars also has a great story.
For games with voice acting, check out Kingdom Come: Deliverance and its sequel.
The Forgotten City is more of an adventure game, but it has a good story.


Only if you define it.
const that = this
This reminded me of this video: https://www.youtube.com/watch?v=IwTXCwqurNQ


Sega doesn’t care because it’s now owned by Sammy, and Sammy wants the good PR from Sega.


+1 for Backblaze. They have convenient backup software too that works great. I back up my parents’ laptop using it, and use their S3-compatible storage for my NAS backups.


A popular EHR cloud service that we use has a developer portal where operations such as logging in or entering two-factor codes would take upwards of 2 minutes to process.
When I asked our rep about it, they went “eh, it’s normal”.
This same company designed an XML SOAP API where, if you request too much data, it just returns an HTTP 200 with no content. No error message or formatted SOAP reply, just a completely nonsensical response.
I hate this company, but there are very few choices in this space.


The RNG mechanics are definitely frustrating for some, but the game is way deeper. Getting to 46 rolls the credits, but you are left with so many unanswered questions. Some people stop there and feel satisfied, but others are curious about the world.
My advice is to push through the initial frustration with RNG on the drafting side. You’ll eventually find that there are roguelite mechanics to help you along, and it will feel less RNG-dependent.


They finally added the last bit of data to this session store, and it broke the whole application: 16 MB of data being read/written from the store on every HTTP request. 50% of all HTTP request processing was spent in the session middleware.
I hate developers who won’t spend the bare minimum of effort to understand the environment they work in.