Board/Device popularity - Off-topic

I was poking around the boards this morning and got curious as to which one might be the most popular and, by extension, which device is most popular. So, I pulled all the current 'viewers' from the boards and did a little math. To be honest, I was a little surprised as to how far up the chart some of the devices were (and how low others were).
Code:
Device Viewers
Blackstone 526
Diamond 485
Dream 416
Raphael 275
Kaiser 215
Xperia 212
Sapphire 162
Topaz 132
Hermes 119
Wizard 84
Rhodium 79
Polaris 77
Trinity 51
Elf 49
Universal 47
Artemis 42
Atom 40
Excalibur 40
Prophet 33
Blue Angel 32
Himalaya / Andes 24
Vogue 23
Vox 22
Nike 21
Titan 21
Athena 20
Magician 20
Sable 19
Jade 18
Treo 750 15
Herald 13
Gene 12
Wallaby 10
Shift 8
Tornado 8
Opal 7
Wings 7
Treo Pro 6
Alpine 5
Rose 5
Iolite 4
StarTrek 4
Charmer 3
Apache 2
Breeze 2
Hero 2
Juno 2
Oxygen 2
Pharos 2
Typhoon 2
Cavalier 1
Hurricane 1
Maple 1
Monet 1
Sedna 1
Beetles 0
Libra 0
Keep in mind, this is at 10:15am EDT. I might do this again later this evening and see what I come up with.
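For anyone curious how a tally like this could be pulled, here's a minimal sketch. The forum URL, board paths, and the "Viewing: N" counter format are all assumptions about the page layout for illustration, not the actual method used above.
Code:
# Minimal sketch of pulling per-board viewer counts, assuming a
# hypothetical forum layout where each board page shows a "Viewing: N"
# counter. BOARD_URLS and the regex are illustrative, not the real site.
import re
import urllib.request

BOARD_URLS = {
    "Blackstone": "https://forum.example.com/blackstone/",
    "Diamond": "https://forum.example.com/diamond/",
    # ...one entry per device board
}

def pull_counts():
    """Return (device, viewers) pairs sorted by viewers, descending."""
    counts = {}
    for device, url in BOARD_URLS.items():
        html = urllib.request.urlopen(url).read().decode("utf-8")
        match = re.search(r"Viewing:\s*(\d+)", html)
        counts[device] = int(match.group(1)) if match else 0
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for device, viewers in pull_counts():
        print(f"{device}\t{viewers}")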

Interesting results

It seems these results should be sampled about every hour and then filtered by the countries and time zones where these particular models were sold without needing to be imported.
I have a feeling that certain hours, especially afternoon and early evening in the US, will show a much higher ratio of Raphaels and Kaisers than any other time. On the same note, the Blackstone will almost certainly be at its best during similar hours in Western Europe. Just my assumption.
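A sketch of that hourly sampling, assuming the pull_counts() helper from the sketch above is saved as viewer_scrape.py (a hypothetical module name). Timestamps are logged in UTC so counts can later be bucketed by local time in the US, Western Europe, and so on.
Code:
# Hourly sampling loop; assumes pull_counts() from the earlier sketch
# is saved as viewer_scrape.py. Logs UTC timestamps so counts can later
# be compared across time zones.
import csv
import time
from datetime import datetime, timezone

from viewer_scrape import pull_counts  # hypothetical module, see above

with open("viewers_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        stamp = datetime.now(timezone.utc).isoformat()
        for device, viewers in pull_counts():
            writer.writerow([stamp, device, viewers])
        f.flush()
        time.sleep(3600)  # roughly once an hour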

Go Blackstone

Thought I'd run another report this afternoon. Seems like the boards are a lot less busy on a Thursday afternoon (at least on the US East Coast).
Code:
Dream 524
Blackstone 511
Diamond 486
Raphael 314
Kaiser 265
Xperia 224
Topaz 140
Sapphire 110
Hermes 98
Rhodium 91
Wizard 78
Universal 67
Polaris 62
Excalibur 54
Prophet 49
Elf 47
Artemis 46
Trinity 35
Vogue 33
Herald 30
Athena 25
Nike 23
Blue Angel 22
Titan 22
Himalaya / Andes 20
Magician 20
Vox 18
Atom 16
Jade 16
Sable 16
Treo 750 16
Gene 9
Hero 9
StarTrek 9
Tornado 8
Charmer 7
Alpine 5
Iolite 5
Pharos 5
Shift 5
Opal 4
Treo Pro 4
Wings 4
Hurricane 3
Juno 3
Maple 3
Rose 3
Apache 2
Typhoon 2
Wallaby 2
Cavalier 1
Monet 1
Oxygen 1
Breeze 0
Libra 0
Sedna 0
Again, take this popularity contest with a grain of salt. Pulled @ 2009-07-09 16:08 EDT
Edit:
I just noticed the Beetles board is gone. Hope I didn't have anything to do with it.

Related

Nokia 8 vs Nokia 7 Plus

Hi guys, I'm in a dilemma. I want to buy a Nokia phone; which one would you prefer, the 7 Plus or the 8? The prices of the two are about the same in our country, though the 7 Plus is a little more expensive. Android Pie is enough for me, and from what little I've researched, Android Q doesn't have many differences versus Pie.
I don't know about the Nokia 8, but the 7 Plus has a manufacturing defect, so be prepared to possibly get a defective screen, and you will almost certainly have to replace the USB-C port within a year, sometimes even after 3 months.

K20 Pro vs Premium Edition

Is it worth paying an extra $70 to upgrade from the K20 Pro 256/8 to the Premium Edition 512/12?
I am still waiting for my K20 Pro, due by the start of November, so I am thinking of selling it sealed and new when it arrives and adding the $70 difference to get the top Premium version.
I'm considering this, but at the same time I suspect I won't feel the difference. Note that I am not a gamer, but I love performance and a speedy UI.
I previously had a Poco with 256/8 running custom ROMs.
I'm the kind of person who always seeks the top variant or trim for the max specs, within a value-for-money mindset:
- 8 GB vs 12 GB RAM: I suspect I won't feel a difference here.
- 512 GB vs 256 GB: the minimum I can deal with is 128 GB when a third card slot is present (as on Realme), since dual SIM is a must for me. The extra storage comes in handy sometimes, but thinking straight, an external hard drive would do the job for movies, series, etc.
- 855+ vs 855: this is my main question, whether it's worth it or not. As I saw in the GSMArena review, the AnTuTu score for the K20 Pro is around 37X000 while the Premium Edition is around 45X000.
Please help me with this one, guys.

Mate 40 Pro vs Mate 40 Pro+

Hi, I just wanted to provide some details on the differences between the two. This time the difference is not just an extra 10x camera and a ceramic back but a lot more:
1. Extra 10x camera
2. SFS 1.0 (about twofold faster than UFS 3.1)* see the explanation below
3. 12 GB RAM standard on both the 256 GB and 512 GB models
4. OIS built into the 50 MP camera; note that the Mate 40 Pro has the same camera but without OIS
5. Ceramic back
6. 3D depth-sensing camera instead of the laser sensor you get on the Mate 40 Pro
All in all, the Mate 40 Pro+ is a different phone from the Mate 40 Pro.
The price difference isn't much: on JD.com the Mate 40 Pro 256 GB is 6,999 yuan (~$1,060) and the Mate 40 Pro+ 256 GB is 8,999 yuan (~$1,360), so for $300 more you get so much more that matters a lot.
Based on measurements of Huawei's homegrown SFS 1.0 flash module, it was established that it provides an almost twofold increase in speed compared with UFS 3.1 flash memory.
According to the reports, the sequential write speed was 1280 MB/s and the random write speed 548 MB/s.
In comparison, the figures for UFS 3.1 are 700 MB/s and 200-300 MB/s, respectively, GizChina reported.
asiatimes.com/2020/11/huaweis-mate-40-pro-has-a-little-secret-under-its-hood/
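As a quick sanity check of the "almost twofold" claim, here's a small sketch computing the ratios from the figures quoted above (the UFS 3.1 random-write value is taken as 250 MB/s, the midpoint of the quoted 200-300 range, which is an assumption):
Code:
# Sanity check of the "almost twofold" claim from the quoted figures.
# UFS 3.1 random write is assumed as 250 MB/s, the midpoint of 200-300.
sfs_seq, ufs_seq = 1280, 700    # sequential write, MB/s
sfs_rand, ufs_rand = 548, 250   # random write, MB/s

print(f"sequential: {sfs_seq / ufs_seq:.2f}x faster")   # ~1.83x
print(f"random:     {sfs_rand / ufs_rand:.2f}x faster") # ~2.19x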
I wish the Pro+ were available internationally. I bought the standard Pro model for myself.

Complete Guide to NVIDIA: Everything You Need To Know

NVIDIA continues its dominance of the graphics card industry. The company's primary GPU lineup under the GeForce brand has been around for over two decades with close to twenty iterations. The series includes discrete graphics processors for desktops and laptops. Fun fact: the name GeForce originally stood for "Geometry Force", since the GeForce 256 was the first PC GPU to compute transform-and-lighting geometry in hardware.
NVIDIA GeForce generations
Here’s a look at NVIDIA’s GeForce lineup:
GeForce 256
The first GeForce GPU in the lineup, the GeForce 256 (NV10) was launched in September 1999 and was the first consumer-level PC graphics chip that shipped with hardware transform, lighting, and shading.
GeForce 2 series
The following year NVIDIA launched the GeForce2 (NV15) that introduced a twin texture processor per pipeline (4x2) design, doubling texture fillrate per clock compared to GeForce 256. This was followed by the GeForce2 MX (NV11), which offered performance similar to the GeForce 256 but at a reduced cost.
GeForce 3 series
In 2001 NVIDIA launched the GeForce3 (NV20) which introduced programmable vertex and pixel shaders. A version of GeForce 3 codenamed NV2A was developed for the Microsoft Xbox game console.
GeForce 4 series
In February 2002 the GeForce4 Ti (NV25) was launched as a refinement to the GeForce3. It included enhancements to anti-aliasing capabilities, an improved memory controller, a second vertex shader, and a manufacturing process size reduction to increase clock speeds. The GeForce4 MX was also introduced as a budget option based on the GeForce2, with the addition of some features from the GeForce4 Ti.
GeForce FX series
The GeForce FX (NV30) introduced a big change in architecture. It brought support for the new Shader Model 2 specification and carried the 5000 model number, as it was the fifth generation of the GeForce family. The series was also infamous for its heating and noisy fan issues.
GeForce 6 series
Launched in April 2004, the GeForce 6 (NV40) added Shader Model 3.0 and fixed the weak floating point shader performance of its predecessor. It additionally implemented high-dynamic-range imaging, SLI (Scalable Link Interface), and PureVideo capability (integrated partial hardware MPEG-2, VC-1, Windows Media Video, and H.264 decoding and fully accelerated video post-processing).
GeForce 7 series
The GeForce 7 series (G70/NV47) was introduced in June 2005 and was the last NVIDIA GPU series to support the AGP bus. It offered a wider pipeline and an increase in clock speed along with new transparency supersampling and transparency multisampling anti-aliasing modes (TSAA and TMAA). A version of the 7950 GT, called the RSX 'Reality Synthesizer', was used as the primary GPU on the Sony PlayStation 3.
GeForce 8 series
The first GeForce (G80) to fully support Direct3D 10, the 8th-gen GeForce series was launched in 2006. It was made using a 90nm process and built around the new Tesla microarchitecture. It was eventually refined and the die size was shrunk down to 65nm. The revised design codenamed G92 was implemented into the 8 series with the 8800GS, 8800GT, and 8800GTS-512, and was launched in 2007.
GeForce 9 series
Revisions of the GeForce 8 series arrived after a short period, in 2008, when the 9800GX2 used two G92 GPUs in a dual-PCB configuration with a single PCI-Express 16x slot. It also included two separate 256-bit memory buses, one for each GPU, and a total of 1GB of memory on the card. Later, the 9800GTX was launched with a single G92 GPU, a 256-bit data bus, and 512 MB of GDDR3 memory.
GeForce 100 series
The following year NVIDIA launched the GeForce 100 series, which was essentially a rebranded version of the GeForce 9 series available only to OEMs, although the GTS 150 was briefly available to consumers.
GeForce 200
The GeForce 200 series, introduced in 2008, was built around the new 65nm GT200 core with a total of 1.4 billion transistors. That was also the year NVIDIA changed its card-naming scheme, replacing the series number with a GTX or GTS prefix followed by the model number. The GeForce GTX 260 and GTX 280 were the first products in the series, while the GeForce 310, released in November 2009, was a rebrand of the GeForce 210.
GeForce 300 series
The 300 series cards were launched in late 2009 and were pretty much rebranded versions of the 200 series with added support for DirectX 10.1. These were limited to OEMs only.
GeForce 400 series
The GeForce 400 series, led by the GF100 chip, was introduced in 2010 based on the Fermi architecture. These were the first NVIDIA GPUs to utilize 1GB or more of GDDR5 memory. The GTX 470 and GTX 480 were criticized for their high power draw, high temperatures, and loud noise, although the GTX 480 was the fastest DirectX 11 card at the time.
GeForce 500 series
To fix these issues, NVIDIA brought out the 500 series with a new flagship GPU (the GTX 580) based on an enhanced GF100 architecture (GF110). It offered higher performance with less power draw, heat, and noise than the preceding GTX 480. Additionally, the GTX 590 was introduced, packing two GF110 GPUs on a single card.
GeForce 600 series
NVIDIA announced the Kepler microarchitecture, manufactured on TSMC's 28nm fabrication process, in 2010, with the first products arriving in 2012. The company started supplying its top-end GK110 cores for use in Oak Ridge National Laboratory's Titan supercomputer, leading to a shortage of GK110 cores. Eventually, NVIDIA had to use the GK104 core, originally intended for the mid-range segment, to power its flagship, the GTX 680. It was followed by the dual-GK104 GTX 690 and the GTX 670.
GeForce 700 series
In May 2013, NVIDIA announced the 700 series, also based on the Kepler architecture, although it finally featured a GK110-based card at the top of the lineup. The GTX 780 was a cut-down version of the GTX Titan that achieved nearly the same performance for two-thirds of the price. A week after the release of the GTX 780, NVIDIA announced the GTX 770, a rebrand of the GTX 680. It was followed by the GTX 760, which was also based on the GK104 core and similar to the GTX 660 Ti.
GeForce 800M series
The GeForce 800M series included rebranded 700M series parts based on the Kepler architecture and some lower-end parts based on the newer Maxwell architecture.
GeForce 900 series
In March 2013, NVIDIA announced the new Maxwell microarchitecture. It was released in September 2014 on the GeForce 900 series and was the last series to support analog video output through DVI-I.
GeForce 10 series
In March 2014, NVIDIA announced that the successor to Maxwell would be the Pascal microarchitecture, which finally debuted with the GeForce 10 series in May 2016. It included 128 CUDA cores per streaming multiprocessor, GDDR5X memory, unified memory, and NVLink.
GeForce 20 series
In August 2018, NVIDIA announced the Turing architecture as the successor to Pascal. The new microarchitecture was designed to accelerate real-time ray tracing and AI inferencing. It included a new ray tracing unit (the RT Core), dedicating hardware to ray tracing, and supported the DXR extension in Microsoft DirectX 12. The company also introduced DLSS (Deep Learning Super Sampling), a new form of anti-aliasing that uses AI to provide sharper imagery with less impact on performance.
The first GPUs to utilize the architecture were primarily aimed at high-end professionals and were introduced under the Quadro series. Eventually, the GeForce RTX series with RTX 2080 Ti, 2080, and 2070 were announced in 2018 followed by the RTX 2060 in January 2019.
In July 2019, NVIDIA announced the GeForce RTX Super line of cards, a refresh of the RTX 20 series which featured higher-spec versions of the RTX 2060, 2070, and 2080.
GeForce 16 series
In February 2019, NVIDIA announced the GeForce 16 series. Based on the same Turing architecture used in the GeForce 20 series, this series omitted the Tensor (AI) and RT (ray tracing) cores. It continues to offer a more affordable graphics solution for gamers while still delivering higher performance than the corresponding cards of previous GeForce generations. Similar to the RTX Super refresh, NVIDIA announced the GTX 1650 Super and 1660 Super cards in October 2019.
GeForce 30 series
The latest and most powerful graphics cards from NVIDIA, the 30 series takes over from the 20 series and was announced in 2020. It introduced a massive performance jump over its predecessor and an excellent price-to-performance ratio. However, getting your hands on one is a difficult task.
Mobile GPUs
NVIDIA has produced a wide range of graphics cards for notebooks as far back as the GeForce 2 series, under the GeForce Go branding. Most of the features present in the desktop counterparts were made available in the mobile versions. With the introduction of the GeForce 8 series, the GeForce Go brand was discontinued and mobile GPUs became part of the main GeForce lineup with an M suffix. NVIDIA changed things again in 2016, dropping the M suffix with the launch of the laptop GeForce 10 series in an attempt to unify the branding between its desktop and laptop GPU offerings. Currently, the RTX 20, GTX 16, and RTX 30 series of GPUs are available in both desktop and laptop variants. NVIDIA also has the GeForce MX range of mobile GPUs intended for lightweight notebooks with entry-level performance.
Nomenclature
Ever since the launch of the GeForce 100 series NVIDIA has been using the following naming scheme for its products:
G, GT, no prefix - Mostly used for the entry-level category of graphics cards, with the last two digits ranging from 00 to 45. Example - GeForce GT 730, GeForce GT 1030
GTS, GTX, RTX - Mid-range category of graphics cards, with the last two digits ranging from 50 to 65. Example - GeForce GTX 1060, GeForce RTX 2060
GTX, RTX - High-end range of graphics cards, with the last two digits ranging from 70 to 95. Example - GeForce GTX 1080 Ti, GeForce RTX 3090
NVIDIA also uses the ‘Super’ or ‘Ti’ suffixes for its graphics cards to signify incremental updates.
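As a rough illustration of this scheme, here's a hypothetical decoder sketch. The tier cutoffs simply follow the ranges listed above; the function name and the simplification (it ignores pre-2008 names like the plain GeForce 256) are my own, not anything NVIDIA publishes.
Code:
# Illustrative decoder for the naming scheme described above; the tier
# cutoffs follow the ranges in this post and are a simplification.
import re

def decode(name: str) -> dict:
    m = re.match(r"GeForce\s+(RTX|GTX|GTS|GT|G)?\s*(\d+)\s*(Ti|Super)?", name, re.I)
    if not m:
        raise ValueError(f"unrecognized name: {name}")
    prefix, number, suffix = m.group(1) or "", m.group(2), m.group(3) or ""
    tier_digits = int(number[-2:])  # last two digits carry the tier
    if tier_digits <= 45:
        tier = "entry-level"
    elif tier_digits <= 65:
        tier = "mid-range"
    else:
        tier = "high-end"
    return {"prefix": prefix, "model": number, "suffix": suffix, "tier": tier}

print(decode("GeForce GT 1030"))     # entry-level
print(decode("GeForce GTX 1080 Ti")) # high-end, Ti refresh
print(decode("GeForce RTX 3090"))    # high-end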
I am having some issues with my GeForce GTX 1050 Ti.
My motherboard sometimes doesn't detect the card and I am forced to use the Intel built-in GPU.
ahfdee said:
I am having some issues with my GeForce GTX 1050 Ti.
My motherboard sometimes doesn't detect the card and I am forced to use the Intel built-in GPU.
What do you do then? Reboot? Wait? Remove the card and insert it into the slot again?
strongst said:
What do you do then? Reboot? Wait? Remove the card and insert it into the slot again?
Yes, I do it multiple times, and if I am lucky it starts working again, but on the next reboot it stops and the same cycle continues.
ahfdee said:
Yes, I do it multiple times, and if I am lucky it starts working again, but on the next reboot it stops and the same cycle continues.
Could be a mechanical/thermal issue with the card or the PCIe socket, if you have already tried all the software-related solutions like BIOS and driver updates.
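One low-level check worth trying on Linux before reseating again: see whether the card is enumerated on the PCIe bus at all. A minimal sketch (assumes lspci is installed; nvidia-smi only gives output if the NVIDIA driver is loaded):
Code:
# If lspci doesn't list the card, the failure is below the driver level
# (slot, power, or the card itself) rather than a software problem.
import shutil
import subprocess

def nvidia_on_bus() -> bool:
    out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
    return any("nvidia" in line.lower() for line in out.splitlines())

print("NVIDIA card on PCIe bus:", nvidia_on_bus())
if shutil.which("nvidia-smi"):
    subprocess.run(["nvidia-smi"])  # driver-level view, if the driver is loaded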
ahfdee said:
I am having some issues with my GeForce GTX 1050 Ti.
My motherboard sometimes doesn't detect the card and I am forced to use the Intel built-in GPU.
I was having the same issue with my GT 730 GDDR5 card. I found out that the system memory was causing the problem.
Sidgup1998 said:
I was having the same issue with my GT 730 GDDR5 card. I found out that the system memory was causing the problem.
How did you fix it?
ahfdee said:
How did you fix it?
I replaced my bad memory stick and voila the issue was fixed!!!
Sidgup1998 said:
I replaced my bad memory stick and voila the issue was fixed!!!
My RAM has no issues.
Any other solutions?
ahfdee said:
My RAM has no issues.
Any other solutions?
Did you check the card on another motherboard?
Sidgup1998 said:
Did you check the card on another motherboard?
I am not able to
ahfdee said:
I am not able to
Try cleaning the slot with some isopropyl alcohol and a Q-tip (make sure you don't leave any fluff behind).
Do you have another PCIe slot available to try?
@kunalneo Thanks for the summary.
I was searching for NVIDIA cards that support UEFI and can also be used in Linux Mint, but didn't find any info yet.
Is there a date from which they generally do?
ahfdee said:
My RAM has no issues.
Any other solutions?
I sent back a PC for a bad RAM stick, but it was really hard to find: it passed all diagnostics yet blue-screened with all different errors. I found the bad stick by taking one out and running for a while, which did great, then swapping sticks, after which it wouldn't boot. I shipped it the next day (I had already contacted the seller and got the return approved), but I got a better system for pretty much the same price.
WillisD said:
I sent back a PC for a bad RAM stick, but it was really hard to find: it passed all diagnostics yet blue-screened with all different errors. I found the bad stick by taking one out and running for a while, which did great, then swapping sticks, after which it wouldn't boot. I shipped it the next day (I had already contacted the seller and got the return approved), but I got a better system for pretty much the same price.
I think my GPU has thermal issues.
When I boot from the GPU's HDMI port after 3 to 4 days of not using the PC, the GPU works. Do I need to apply some thermal paste?
I'm no expert, but if you had thermal issues they wouldn't show up after 3 or 4 days idle. Get MSI Afterburner and watch the temps while using the card; for a thermal shutdown you'd need to be at 100°C or higher. Are you on Windows or Linux?
Either way, do a clean install of the drivers and reset the NVIDIA settings. How do you boot from an HDMI port?
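For anyone who'd rather watch temperatures from a terminal than install Afterburner, here's a minimal polling sketch using nvidia-smi's standard query flags (requires the NVIDIA driver to be installed and loaded):
Code:
# Polls the GPU core temperature every few seconds via nvidia-smi.
# Run it while gaming/rendering; sustained readings near 100 C would
# point at a real thermal problem.
import subprocess
import time

while True:
    temp = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    ).stdout.strip()
    print(f"GPU temperature: {temp} C")
    time.sleep(5)  # poll every few seconds while under load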
WillisD said:
I'm no expert, but if you had thermal issues they wouldn't show up after 3 or 4 days idle. Get MSI Afterburner and watch the temps while using the card; for a thermal shutdown you'd need to be at 100°C or higher. Are you on Windows or Linux?
Either way, do a clean install of the drivers and reset the NVIDIA settings. How do you boot from an HDMI port?
I can either boot from the NVIDIA HDMI port or the default Intel HDMI port.
To boot from either of them, I just take the HDMI cable out of one port and plug it into the other.
ahfdee said:
I think my GPU has thermal issues.
When I boot from the GPU's HDMI port after 3 to 4 days of not using the PC, the GPU works. Do I need to apply some thermal paste?
I typically redo my thermal paste about once a year.
Anybody got a list of which ones support GOP?
Any GTX is better than a GT model, even if the GT's number is bigger; for example, a GTX 750 is better than a GT 1030, and Ti variants are stronger still. I have used and tested computer hardware and software for about 30 years. I started with a Commodore 64 that had to save data (GW-BASIC) to a tape drive, then used a Spectrum 128 with 128 KB of memory.
Then my first PC was a 286 with a 10 MB HDD and 2 MB of RAM running DOS 6.22, and then I used my first Windows (Windows 3.1).
In my experience, the better graphics card for gaming and rendering is the one with more CUDA cores, more memory bus bandwidth, and more ROPs/TMUs (especially for rendering and video mixing).
Higher core and memory frequencies have less effect on speed; more CUDA cores, a wider memory bus, and more ROPs and TMUs matter most in a graphics card, especially an NVIDIA one.
Sorry for my poor English.

Galaxy tab 2 10.1

I am 76 years old and still struggling. I am glad to be back. I am trying to root my wife's Tab 2. Thanks for your support. Mick
mickhdoug said:
I am 76 years old and still struggling. I am glad to be back. I am trying to root my wife's Tab 2. Thanks for your support. Mick
A warm welcome to XDA. I hope you'll always get the support you require. And age isn't an issue at all; you're only 10 years more experienced than me.
Did you already check the device forum for a solution?
Samsung Galaxy Tab 2
The Samsung Galaxy Tab 2 is an Android tablet released in the spring of 2012. It has many different versions, with variations coming in capacity, connectivity, and screen size. The screen size options are 7" and 10.1". Storage capacities vary between 16 GB and 32 GB. There are also HSDPA enabled...
forum.xda-developers.com
