Ahrin wrote: The bad part isn't the screwup itself, there have been others before, but at least in those cases the problem was recoverable.
The issue with this one is that you can't do ANYTHING: the server restarts, BSODs on boot, restarts, BSODs... and so on forever. You have to send a technician out to repair that machine. And we can be talking about farms of dozens or hundreds of them.
Detaching disks and rebuilding machines in Azure, what a treat.
It has wreaked havoc at the user level; APAC got hit hard because the update landed mid-morning and caught everyone with their pants down.
Well... the fix doesn't take much...
Workaround Steps for individual hosts:
Reboot the host to give it an opportunity to download the reverted channel file. If the host crashes again, then:
Boot Windows into Safe Mode or the Windows Recovery Environment
NOTE: Putting the host on a wired network (as opposed to WiFi) and using Safe Mode with Networking can help remediation.
Navigate to the %WINDIR%\System32\drivers\CrowdStrike directory
Locate the file matching “C-00000291*.sys”, and delete it (a small script for this step is sketched right after these workaround steps).
Boot the host normally.
Note: Bitlocker-encrypted hosts may require a recovery key.
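If you end up doing that delete step on more than a couple of boxes and the host can reach a Python interpreter (say, from Safe Mode with Networking or a recovery image), here is a minimal sketch of the file deletion in Python. The directory and pattern are the ones from the bulletin above; everything else is my own assumption, so treat it as a starting point rather than an official tool:

import glob
import os

# Channel-file directory from the CrowdStrike bulletin; %WINDIR% usually expands to C:\Windows.
windir = os.environ.get("WINDIR", r"C:\Windows")
driver_dir = os.path.join(windir, "System32", "drivers", "CrowdStrike")

# Delete every file matching the faulty channel file pattern C-00000291*.sys.
for path in glob.glob(os.path.join(driver_dir, "C-00000291*.sys")):
    print(f"Deleting {path}")
    os.remove(path)

Obviously the usual way is just doing it by hand in cmd from Safe Mode, but the pattern match is the same either way.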
Workaround Steps for public cloud or similar environment including virtual:
Option 1:
Detach the operating system disk volume from the impacted virtual server
Create a snapshot or backup of the disk volume before proceeding further as a precaution against unintended changes (see the snapshot sketch after Option 2)
Attach/mount the volume to a new virtual server
Navigate to the %WINDIR%\System32\drivers\CrowdStrike directory
Locate the file matching “C-00000291*.sys”, and delete it.
Detach the volume from the new virtual server
Reattach the fixed volume to the impacted virtual server
Option 2:
Roll back to a snapshot before 0409 UTC.
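For Option 1 at any kind of scale in Azure, the snapshot precaution is the part worth scripting first. A rough sketch using the Azure SDK for Python (azure-identity and azure-mgmt-compute), with the subscription, resource group and disk name as placeholders you'd fill in for your own environment; this is my own assumption, not something from the CrowdStrike bulletin:

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
DISK_NAME = "<impacted-os-disk>"        # placeholder
SNAPSHOT_NAME = f"{DISK_NAME}-pre-fix"

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Look up the impacted VM's OS disk and snapshot it before detaching or editing anything.
disk = compute.disks.get(RESOURCE_GROUP, DISK_NAME)
snapshot = compute.snapshots.begin_create_or_update(
    RESOURCE_GROUP,
    SNAPSHOT_NAME,
    {
        "location": disk.location,
        "creation_data": {
            "create_option": "Copy",
            "source_resource_id": disk.id,
        },
    },
).result()
print(f"Snapshot created: {snapshot.id}")

The detach/attach dance itself can be driven from the same client by editing the VM's storage profile, but that part depends too much on each setup to be worth generalizing here.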