Nosomi wrote:
f5inet wrote: ...
Hmm, supposedly they have a hypervisor and split the virtualization into one partition for the game and another for applications, which allows fast switching between game and apps. The part about each game having its own OS surprised me; is there any link where I can read about this? Because with the OS on each disc it would have to work like a live CD, and that would hurt performance unless each game installed itself and created its own virtual machine...
Kisses
darksch wrote: At the same time it has the drawback that, in order to ship performance improvements in that exclusive OS, the exclusive OS would have to be updated game by game.
In other words, patching the game. But that is the normal case; the odd thing would be for games to behave differently because the machine's OS was updated. As has been said, "differently" does not necessarily mean better; something could even break. On a closed machine you have to guarantee behaviour: when the developer compiles the game, you guarantee that it will run the same on every machine, forever.
Nosomi wrote: and that would hurt performance unless each game installed itself and created its own virtual machine...
Zokormazo wrote: Nosomi wrote: f5inet wrote: ...
More than a live CD, they are like machine images in a virtual machine environment. And whether they are all based on a single image or on different ones need not make any difference to performance.
In short, the hypervisor loads a different exclusive OS for each game, instead of always loading the same one. The only performance drawback is having to load a different one every time you change games.
In other respects it does have pros and cons.
The one-OS-image-per-game model has the advantage of avoiding compatibility problems caused by updates to the exclusive OS.
At the same time it has the drawback that, in order to ship performance improvements in that exclusive OS, the exclusive OS would have to be updated game by game.
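A purely illustrative Python sketch of that trade-off (the game names, version numbers and boot functions below are made up; this is not the actual Xbox One software): with a single shared exclusive OS every title picks up an OS fix automatically, whereas with per-title images a fix only reaches a game when that game itself is patched.

```python
# Hypothetical model of the two update strategies discussed above.
# Not real Xbox One code; it only illustrates the trade-off.

SHARED_OS_VERSION = "10.2"       # one exclusive OS image, updated centrally

per_title_os = {                 # each game ships and pins its own image
    "GameA": "10.0",
    "GameB": "10.1",
}

def boot_shared(title: str) -> str:
    # Every title runs on whatever the current shared exclusive OS is:
    # an OS performance fix reaches all games at once, but may also change behaviour.
    return f"{title} -> exclusive OS {SHARED_OS_VERSION}"

def boot_per_title(title: str) -> str:
    # A title keeps the OS image it shipped with until the game itself is patched:
    # behaviour is frozen, but OS improvements must be re-shipped game by game.
    return f"{title} -> exclusive OS {per_title_os[title]}"

for game in sorted(per_title_os):
    print(boot_shared(game))
    print(boot_per_title(game))
```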
Digital Foundry: What were your takeaways from your Xbox 360 post-mortem and how did that shape what you wanted to achieve with the Xbox One architecture?
Nick Baker: It's hard to pick out a few aspects we can talk about here in a small amount of time. I think one of the key points... We took a few gambles last time around and one of them was to go with a multi-processor approach rather than go with a small number of high IPC [instructions per clock] power-hungry CPU cores. We took the approach of going more parallel with cores more optimised for power/performance area. That worked out pretty well... There are a few things we realised like off-loading audio, we had to tackle that, hence the investment in the audio block. We wanted to have a single chip from the start and get everything as close to memory as possible. Both the CPU and GPU - give everything low latency and high bandwidth - that was the key mantra.
Some obvious things we had to deal with - a new configuration of memory, we couldn't really pass pointers from CPU to GPU so we really wanted to address that, heading towards GPGPU, compute shaders. Compression, we invested a lot in that so hence some of the Move Engines, which deal with a lot of the compression there... A lot of focus on GPU capabilities in terms of how that worked. And then really how do you allow the system services to grow over time without impacting title compatibility. The first title of the generation - how do you ensure that that works on the last console ever built while we value-enhance the system-side capabilities.
Digital Foundry: You're running multiple systems in a single box, in a single processor. Was that one of the most significant challenges in designing the silicon?
Nick Baker: There was lot of bitty stuff to do. We had to make sure that the whole system was capable of virtualisation, making sure everything had page tables, the IO had everything associated with them. Virtualised interrupts.... It's a case of making sure the IP we integrated into the chip played well within the system. Andrew?
Andrew Goossen: I'll jump in on that one. Like Nick said there's a bunch of engineering that had to be done around the hardware but the software has also been a key aspect in the virtualisation. We had a number of requirements on the software side which go back to the hardware. To answer your question Richard, from the very beginning the virtualisation concept drove an awful lot of our design. We knew from the very beginning that we did want to have this notion of this rich environment that could be running concurrently with the title. It was very important for us based on what we learned with the Xbox 360 that we go and construct this system that would disturb the title - the game - in the least bit possible and so to give as varnished an experience on the game side as possible but also to innovate on either side of that virtual machine boundary.
We can do things like update the operating system on the system side of things while retaining very good compatibility with the portion running on the titles, so we're not breaking back-compat with titles because titles have their own entire operating system that ships with the game. Conversely it also allows us to innovate to a great extent on the title side as well. With the architecture, from SDK to SDK release as an example we can completely rewrite our operating system memory manager for both the CPU and the GPU, which is not something you can do without virtualisation. It drove a number of key areas... Nick talked about the page tables. Some of the new things we have done - the GPU does have two layers of page tables for virtualisation. I think this is actually the first big consumer application of a GPU that's running virtualised. We wanted virtualisation to have that isolation, that performance. But we could not go and impact performance on the title.
We constructed virtualisation in such a way that it doesn't have any overhead cost for graphics other than for interrupts. We've contrived to do everything we can to avoid interrupts... We only do two per frame. We had to make significant changes in the hardware and the software to accomplish this. We have hardware overlays where we give two layers to the title and one layer to the system and the title can render completely asynchronously and have them presented completely asynchronously to what's going on system-side.
System-side it's all integrated with the Windows desktop manager but the title can be updating even if there's a glitch - like the scheduler on the Windows system side going slower... we did an awful lot of work on the virtualisation aspect to drive that and you'll also find that running multiple system drove a lot of our other systems. We knew we wanted to be 8GB and that drove a lot of the design around our memory system as well.
http://www.eurogamer.net/articles/digitalfoundry-the-complete-xbox-one-interview
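Goossen's remark about the GPU having "two layers of page tables for virtualisation" refers to nested address translation: a title's addresses go through its own page table and then through a second, hypervisor-owned table. A minimal sketch of that idea in Python (toy page size and mappings, not the real Xbox One memory layout):

```python
# Toy model of two-stage (nested) address translation.
# Stage 1: title/guest page table (guest-virtual -> guest-physical).
# Stage 2: hypervisor page table   (guest-physical -> machine-physical).
# All page numbers and mappings are made up for illustration only.

PAGE_SIZE = 4096

stage1 = {0: 7, 1: 3, 2: 9}     # guest virtual page  -> guest physical page
stage2 = {3: 40, 7: 12, 9: 55}  # guest physical page -> machine physical page

def translate(guest_virtual_addr: int) -> int:
    """Walk both page tables, as a virtualised MMU would do in hardware."""
    vpn, offset = divmod(guest_virtual_addr, PAGE_SIZE)
    gpn = stage1[vpn]            # first layer: owned by the title's OS
    mpn = stage2[gpn]            # second layer: owned by the hypervisor
    return mpn * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))    # virtual page 1 -> guest page 3 -> machine page 40
```

Because the second walk happens in hardware, the hypervisor gets isolation without the title paying a per-access software cost, which is why the interview says the only remaining graphics overhead is the two interrupts per frame.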
Pennylbk wrote: The Xbox One will never be able to match a high-end PC, because even if DirectX 12 greatly increases the Xbox's performance, PC graphics cards will take advantage of the API just the same, so performance would increase on both, but the gap would still exist.
Pennylbk wrote: What I mean is that the raw power is what it is. If the Xbox One's raw power is 30 and the PC's is 100, then with both using the same API both will gain performance, but the same gap will remain in the new performance figures.
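A quick worked example of that point, using made-up numbers: if an API improvement lifts both platforms by the same percentage, the ratio between them does not change (the absolute difference actually grows).

```python
# Made-up numbers: the argument only depends on both sides scaling by the same factor.
xbox_raw, pc_raw = 30, 100
api_gain = 1.20                               # say the new API gives both a 20% uplift

xbox_new, pc_new = xbox_raw * api_gain, pc_raw * api_gain
print(xbox_new, pc_new)                       # 36.0 120.0
print(pc_raw / xbox_raw, pc_new / xbox_new)   # ~3.33x both times: the relative gap is unchanged
```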
URTYK wrote: Pennylbk wrote: ...
Which gap are you referring to?
The money gap, right?
albert89 wrote: Look, I didn't mean to stir up this console vs PC argument, but what I'm saying is that the One looks very attractive to me and sometimes I get the urge to buy it and add a basic PC or laptop to the budget (750€). Other times, since people say the PC is better, that you can play perfectly well with a gamepad, and there are the Steam sales, I lean towards the PC. The bottom line is I'm confused and I wanted to know: if the Xbox One reaches 1080p at 30-40fps, that's enough for me, I'll skip the gaming PC and buy it right away; if it doesn't improve, I'll buy the PC. I need a PC no matter what for studying and downloading films, but I'm often tempted by the One at 380€ with a game included, Forza and all that, and I feel like grabbing it (I'd grab a 300€ PC for studying and then I'd have both). That's why I wanted to clear up my doubt, to know whether I can make the most of the TV I've just bought (Sony W855 60 LED) or go with the PC to be sure of 1080p... (the console or the PC will be hooked up to the TV).
That's all I wanted to know, thanks!
albert89 wrote: But if you can play with a wireless pad, what difference is there? I'm not very well informed about the cloud stuff; the PC I had in mind was an i5 + R9 280, without the X.
But sometimes I look at the One and I don't really notice much difference either...
cercata wrote: Clearly a beastly PC will always look better, but since console games are what sells most, the designers build the game with the console in mind, and the PC improvements afterwards are just higher resolution, higher framerate, better antialiasing and things like that... it's not as if they designed the games around an i7 and a GTX 970.
For now, even the smart guys at "GAMESPOT" can't spot the differences between versions; in the final test they get completely fooled:
https://www.youtube.com/watch?v=OiogSjppLlI
eloskuro wrote: it would be nice if you got back on topic and all that
Recluta_Patoso wrote: eloskuro wrote: it would be nice if you got back on topic and all that
And the new generation starts today. OHHHH YEAHHHH !!!!!
albert89 wrote: Recluta_Patoso wrote: eloskuro wrote: it would be nice if you got back on topic and all that
And the new generation starts today. OHHHH YEAHHHH !!!!!
Why?
Nosomi wrote: Zokormazo wrote: [...]
Come on, I only gave it as an example because it was the first thing that came to mind; a live CD is an image, after all. I know it gets installed, but I still have doubts, because each game would have to ship its OEM or its OS, which Microsoft would have to provide and each developer would have to work on. Phew, this is a double-edged sword: either it gets optimised to the maximum or it will give you headaches.
On the other hand, the hypervisor (the VMware hypervisor I use; I run a small farm of 8 virtual servers) does its provisioning of requirements such as cores, RAM, disk usage, IP, etc. If the One creates a virtual machine for each game, that provisioning has to come from a single point... I don't know, I have quite a few doubts and this has caught my interest.
Does anyone have a link where this can be seen technically?
Kisses
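For what it's worth, a hypervisor built for a single fixed purpose does not need the dynamic provisioning a general VMware farm does; it can carve the machine up statically at design time. A hypothetical Python sketch of such a static partition (the 5 GB / 6 core split below is only the commonly reported Xbox One figure, used here as an illustration; none of this is real system code):

```python
# Hypothetical static partition of the console's resources between the two VMs.
# Figures are illustrative (commonly reported Xbox One split), not an official spec.

from dataclasses import dataclass

@dataclass(frozen=True)
class Partition:
    name: str
    ram_gb: int
    cpu_cores: int

PARTITIONS = (
    Partition("exclusive (game) VM", ram_gb=5, cpu_cores=6),
    Partition("shared (apps/system) VM", ram_gb=3, cpu_cores=2),
)

def check_budget(total_ram_gb: int = 8, total_cores: int = 8) -> bool:
    """Unlike a general-purpose VMware farm, the split is fixed at design time,
    so 'provisioning' reduces to checking that the static budget adds up."""
    return (sum(p.ram_gb for p in PARTITIONS) == total_ram_gb
            and sum(p.cpu_cores for p in PARTITIONS) == total_cores)

print(check_budget())  # True
```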
vihuquinpa wrote:
Microsoft's GDC conferences start today. I suppose that's why he says it.
URTYK wrote: If you find out about any site or anything that's covering Phil Spencer's conference, share it!!!
ikaro3d wrote: Nothing at all, in the end they won't announce anything
atlasFTW wrote: The Xbox One SDK Preview launches today!
EDIT: Epic, with pre-alpha images of Unreal Tournament on DX12
EDIT2: DX12 gives the One's GPU a 20% boost