First Thoughts
Bringing our preview of DirectX 12 to a close, what we’re seeing today is both a promising sign of what has been accomplished so far and a reminder of what is left to do. As it stands much of DirectX 12’s story remains to be told – features, feature levels, developer support, and more will only finally be unveiled by Microsoft next month at GDC 2015. So today’s preview is much more of a beginning than an end when it comes to sizing up the future of DirectX.
But for the time being we’re finally at a point where we can say the pieces are coming together, and we can finally see parts of the bigger picture. Drivers, APIs, and applications are starting to arrive, giving us our first look at DirectX 12’s performance. And we have to say we like what we’ve seen so far.
With DirectX 12 Microsoft and its partners set out to create a cross-vendor but still low-level API, and while there was admittedly little doubt they could pull it off, there has always been the question of how well they could do it. What kind of improvements and performance could you truly wring out of a new API when it has to work across different products and can never entirely avoid abstraction? The answer as it turns out is that you can still enjoy all of the major benefits of a low-level API, not the least of which are the incredible improvements in CPU efficiency and multi-threading.
That said, any time we’re looking at an early preview it’s important to keep our expectations in check, and that is especially the case with DirectX 12. Star Swarm is a best case scenario and designed to be a best case scenario; it isn’t so much a measure of real world performance as it is technological potential.
But to that end, it’s clear that DirectX 12 has a lot of potential in the right hands and the right circumstances. It isn’t going to be easy to master, and I suspect it won’t be a quick transition, but I am very interested in seeing what developers can do with this API. With the reduced overhead, the better threading, and ultimately a vastly more efficient means of submitting draw calls, there’s a lot of potential waiting to be exploited.
Pada wrote: AnandTech's preview of DirectX 12. PC-only for now, obviously, but it looks promising.
http://www.anandtech.com/show/8962/the- ... star-swarm
[Graphs from the article: CPU scaling, GPU comparison, DX12 vs Mantle, frame times, conclusions]
dicanio1 wrote: A summary..
Just so it's clear, because from all the reading you'd think the One gets nothing at all; it would be good to spell it out for the deniers..
And that way we all find out; a summary for those of us who didn't quite understand it..
First Thoughts
Bringing our preview of DirectX 12 to a close, what we’re seeing today is both a promising sign of what has been accomplished so far and a reminder of what is left to do. As it stands much of DirectX 12’s story remains to be told – features, feature levels, developer support, and more will only finally be unveiled by Microsoft next month at GDC 2015. So today’s preview is much more of a beginning than an end when it comes to sizing up the future of DirectX.
But for the time being we’re finally at a point where we can say the pieces are coming together, and we can finally see parts of the bigger picture. Drivers, APIs, and applications are starting to arrive, giving us our first look at DirectX 12’s performance. And we have to say we like what we’ve seen so far.
With DirectX 12 Microsoft and its partners set out to create a cross-vendor but still low-level API, and while there was admittedly little doubt they could pull it off, there has always been the question of how well they could do it. What kind of improvements and performance could you truly wring out of a new API when it has to work across different products and can never entirely avoid abstraction? The answer as it turns out is that you can still enjoy all of the major benefits of a low-level API, not the least of which are the incredible improvements in CPU efficiency and multi-threading.
That said, any time we’re looking at an early preview it’s important to keep our expectations in check, and that is especially the case with DirectX 12. Star Swarm is a best case scenario and designed to be a best case scenario; it isn’t so much a measure of real world performance as it is technological potential.
But to that end, it’s clear that DirectX 12 has a lot of potential in the right hands and the right circumstances. It isn’t going to be easy to master, and I suspect it won’t be a quick transition, but I am very interested in seeing what developers can do with this API. With the reduced overhead, the better threading, and ultimately a vastly more efficient means of submitting draw calls, there’s a lot of potential waiting to be exploited.
Magnarock wrote: How can DirectX 12 perform worse than Mantle on an AMD configuration, if Microsoft paid AMD a whopping 2 million US dollars to buy Mantle, rename it, and call it DirectX 12?
Another myth falls; that's quite a few of them now...
2015 is going to be the year the masks come off and the truth comes out.
That must be true, because as we all know every game on XBOX ONE performs 5x better, minimum, than its PC version.
pspskulls wrote: Should we be surprised by these DX12 graphs?... I'd say they still don't surprise me enough.
Pada wrote: Excerpt from the AnandTech article: Oxide Games has emailed us this evening with a bit more detail about what's going on under the hood, and why Mantle batch submission times are higher. When working with large numbers of very small batches, Star Swarm is capable of throwing enough work at the GPU such that the GPU's command processor becomes the bottleneck. For this reason the Mantle path includes an optimization routine for small batches (OptimizeSmallBatch=1), which trades GPU power for CPU power, doing a second pass on the batches in the CPU to combine some of them before submitting them to the GPU. This bypasses the command processor bottleneck, but it increases the amount of work the CPU needs to do (though note that in AMD's case, it's still several times faster than DX11).
Let's recall, for the 239874209348th time, from the interview: We also took the opportunity to go and highly customise the command processor on the GPU. Again concentrating on CPU performance... The command processor block's interface is a very key component in making the CPU overhead of graphics quite efficient. We know the AMD architecture pretty well - we had AMD graphics on the Xbox 360 and there were a number of features we used there. We had features like pre-compiled command buffers where developers would go and pre-build a lot of their states at the object level where they would [simply] say, "run this". We implemented it on Xbox 360 and had a whole lot of ideas on how to make that more efficient [and with] a cleaner API, so we took that opportunity with Xbox One and with our customised command processor we've created extensions on top of D3D which fit very nicely into the D3D model and this is something that we'd like to integrate back into mainline 3D on the PC too - this small, very low-level, very efficient object-orientated submission of your draw [and state] commands.
In short, it will be a real surprise (irony) to start seeing dual command processors in the "true" full DX12 cards (feature level 12) announced soon, just like the one the One already has.
papatuelo wrote:
What???
You mean they used technology that wasn't even on the market yet???
Wow, who could have imagined it, considering every precedent pointed straight at it. What guys!!! How they fooled us!!!
comance wrote: All that data is well and good.
But then you see games like the Battlefield Hardline beta at 720p, or Halo 5 itself at the same resolution, and your eyes fall right out of your head.
I'm dying for your prophecies to come true so I can buy one, but there's just no way.
comance wrote: Well, I hope it's true.
I'm not one to own two consoles, let alone pay for online on both, but if they release a Halo on DX12 like you say, where it shows in the graphics, I'll sell my PS4 and buy a One.
But personally I think time is working against Microsoft.
They're letting the clock run out on them.
How many years is this generation going to hold out against the competition?
josemayuste wrote: There are some very interesting opinions on the DirectX 12 topic over on NeoGAF, for example from Durante himself, the creator of GeDoSaTo and DSfix, and from people who really know what they're talking about
http://m.neogaf.com/showthread.php?t=987116&page=5
they also discuss statements from Phil Spencer: “On the DX12 question, I was asked early on by people if DX12 is gonna dramatically change the graphics capabilities of Xbox One and I said it wouldn’t. I’m not trying to rain on anybody’s parade, but the CPU, GPU and memory that are on Xbox One don’t change when you go to DX12. DX12 makes it easier to do some of the things that Xbox One’s good at, which will be nice and you’ll see improvement in games that use DX12, but people ask me if it’s gonna be dramatic and I think I answered no at the time and I’ll say the same thing.”
"Sobre DirectX12 , ya me preguntaron si DirectX12 va a cambiar de forma dramática las capacidades gráficas de Xbox One y dije que no lo haría. No estoy intentando aguarle la fiesta a nadie, pero la CPU , GPU y la memoria que hay en Xbox One no cambia cuando pasas a DirectX12. directX12 hace que sea más fácil hacer algunas de las cosas en las que Xbox One es buena de por sí , lo cual será bueno y se verá una mejora en juegos que usen DirectX12, pero la gente me pregunta si va a ser un cambio drástico y creo que ya respondí que no en aquella ocasión y seguiré diciendo lo mismo"
We'd need to know the context in which he was asked, and why he said exactly that; he probably knows only the basics of the subject and simply does his job as the public face of the brand
it's also said the change could be like going from Perfect Dark Zero on Xbox 360 to Halo 4, also on Xbox 360, according to Phil Spencer
as I said, the discussion on GAF is very interesting.
regards.
josemayuste wrote: @papatuelo And is there anything in my message that says otherwise?
david1995w wrote: comance wrote: All that data is well and good.
But then you see games like the Battlefield Hardline beta at 720p, or Halo 5 itself at the same resolution, and your eyes fall right out of your head.
I'm dying for your prophecies to come true so I can buy one, but there's just no way.
But Halo 5 really is a beta, and there's still almost a year until it comes out. Besides, they said they hadn't focused on the graphics side for the beta. Don't doubt it for a second: Halo 5 will ship at 900p 60fps, minimum.