For people who work in technology like me, building a computer is second nature. I often build and tear down two full desktops a week, while those who work in computer repair shops or build custom systems for clients can put together a dozen or more machines in the same period. If you have never built a computer, however, the task can seem daunting, because it appears far more complex than it actually is.
If you want to learn, a Udemy course titled "How to build a computer from scratch" aims to teach you how to do it yourself. Unfortunately, it fails so thoroughly and so often that the class is nothing short of torture for anyone who actually knows how to build a custom PC.
The lectures start badly, with the instructor teaching half-truths about the benefits of building a custom PC versus buying a pre-built system. First, the lecture claims that building a custom system gives you the ability to perform more upgrades and swap parts over time. The instructor goes on to say that the upgrade possibilities are endless and that the system will have more advanced features, and emphasizes that pre-built systems lack these attributes, offering extremely limited upgrade options at a higher price.
This is only partially true. Many pre-built systems can realistically be upgraded in the same way as a custom-built system. Every motherboard has its limitations in terms of the processors it can support and the type of RAM it is compatible with, but this is true for all computers, not just pre-built ones. Proprietary parts in some pre-built systems can put an end to upgrades, but that is certainly not always the case. As for price, it can sometimes be cheaper to buy a pre-built system: one example is the Overpowered DTW2 at Walmart, a fully built system that is often available for less than the cost of its parts.
The instructor then states that you cannot change the BIOS settings of a pre-built system without causing problems, while any custom-built PC can have its BIOS tweaked to improve it. Personally, I feel that building a custom system is the better option, but I find the amount of false information presented at the very start of this course extremely troubling.
In Section 3, the instructor begins to explain the different parts of a computer. This whole section is rather poorly done. The system used as an example has absolutely no cable management. Parts such as the RAM are completely hidden. The processor is mentioned but not shown, because it sits under a heatsink, and only the edge of the system's hard drive is visible. From that, I do not know how a beginner would ever learn what a hard drive, processor, or RAM actually looks like.
The remaining lectures in Section 3 cover the components one by one and show you what a processor and RAM sticks look like. These lectures also have serious problems. For example, the motherboard lecture uses an old example from around 1998. Almost none of the components shown in that example are still in use, including the Slot 1 connector, the ISA and AGP slots, the Intel 440BX chipset, the SCSI connectors, and much more. It then shows a real AM3 motherboard, then returns to the late-1990s diagram for reasons I cannot explain or understand.
The CPU lecture is informative enough and has no real problems, but the instructor goes back to showing obsolete hardware in the RAM lecture. SDRAM, which stopped being used around the turn of the millennium, looks more or less the same as modern RAM, so it is not a huge problem, but it is a strange choice when pictures of modern RAM sticks are readily available. This lecture is extremely shallow, with no discussion of the different types of RAM or RAM settings. Clock speeds? Latency? Slot compatibility? Not relevant, apparently.
The PSU lecture is similar to the RAM lecture in that it is extremely shallow. According to the instructor, the unit simply needs to have enough power to run the system, regardless of the power supply's quality or efficiency. There is no discussion of 12V rails or of how to calculate whether a power supply delivers enough amperage for a given high-end card.
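The lecture never shows what that calculation looks like, so here is a minimal sketch of the kind of 12V-rail math it skips. All the wattage figures and the safety margin below are illustrative assumptions of mine, not values from the course; a real build should use the spec sheets for its actual parts.

```python
# A minimal sketch (not from the course) of basic 12V-rail sizing.
# Component wattages are illustrative estimates, not measured values.

def required_12v_amps(component_watts, headroom=1.3):
    """Estimate the 12V amperage a PSU should supply.

    component_watts: dict of part name -> estimated peak draw in watts
                     for parts fed (mostly) from the 12V rail.
    headroom:        safety margin for transient spikes and PSU aging.
    """
    total_watts = sum(component_watts.values())
    return total_watts * headroom / 12.0  # P = V * I  ->  I = P / V

# Hypothetical build: rough peak-draw guesses for a midrange gaming PC.
build = {
    "cpu": 95,            # e.g. a 95 W TDP processor
    "gpu": 180,           # e.g. a midrange graphics card
    "drives_fans_etc": 30,
}

amps = required_12v_amps(build)
print(f"Aim for a PSU whose 12V rail(s) supply at least {amps:.0f} A "
      f"(~{amps * 12:.0f} W on the 12V rail).")
```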
Section 4 of this course goes into detail on selecting your components, and it begins with another lecture on the motherboard. Did you know that Micro-ATX motherboards offer almost the same performance as ATX motherboards? I am fairly sure you did not, because the claim is meaningless. The form factor of a motherboard does not alter its performance. It is possible to have an ATX board and a micro-ATX board with identical performance, and it is possible for the micro-ATX board to perform better than the ATX board. Size is simply not a factor in determining performance, yet the instructor claims it is.
The CPU section of this course is too inaccurate and out of date to be useful. The instructor does not really seem to have a clue what he is talking about. He presents Intel's Broadwell architecture as an exciting new technology offering better multicore performance and the ability to "generate much more powerful graphics." He also refers to the Intel Core i3, Core i5, and Core i7 as long-established product lines without showing any understanding that these brands evolve over time as new processors are introduced. When these lectures were recorded, Ryzen had not yet been released, and the discussion of AMD covered only the company's APUs and low prices.
At this point, I skipped ahead to the part of the lectures that actually shows you how to build the system. There are more problems here, but in general you will learn how to connect the parts together. This is already the longest Udemy course review I have ever written, and I think you get the point by now: this course is awful.
I suppose it would technically show you how to build a computer, but you would be horribly prepared to select the right components for a system. Even if you managed to pick the correct parts, following these build instructions would leave you with a poorly assembled system and plenty of potential problems down the road.
You may think I simply picked a bad course to review, since several Udemy courses teach you how to build a PC, but I specifically chose this one because it carries a high user rating on Udemy of 4.6/5. That is genuinely shocking to me, and I cannot help but wonder whether many of those reviews were faked to boost sales of the course.
I cannot overstate how terrible this course is. I consider it risky and potentially expensive to approach building a modern system based on the information in these lectures. It would be hard to find a worse source of information, and you would be far better off buying a pre-built system than trying to build one yourself after taking this course. If you really want to learn how to build a PC, look elsewhere.
In this respect, it should be noted that PCIe SSDs are the ones offering the highest speeds: from the 1,000 to 3,500 MB/s that a PCIe 3.0 NVMe drive can reach in any computer today, up to the several GB/s already promised by some manufacturers, which we will reach soon.
A month ago, the first drives with a PCIe 4.0 bus arrived, offering speeds of up to 5 GB/s, and today there is already talk of 7 GB/s. Specifically, a little less than two months ago, Phison said we would see SSDs faster than 6 GB/s in a very short time. A few weeks later, it is Phison itself assuring us that 7 GB/s SSDs are coming soon.
To that end, the company has long been working on the next generation of its controller, which will be one of the parts responsible for achieving this speed in PCIe 4.0 SSDs. Recall that the E16 is the controller used in today's PCIe 4.0 SSDs, but it will be the new-generation E18 that reaches speeds of up to 7 GB/s over PCIe 4.0. With the E18, which supports a maximum of eight memory channels and up to 8 TB of flash, Phison would be getting close to the limits of PCIe 4.0.
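To see why 7 GB/s sits so close to the ceiling Phison is chasing, here is a quick back-of-the-envelope check of the theoretical bandwidth of the x4 link a typical M.2 NVMe drive uses. The per-lane transfer rates and 128b/130b encoding are the published PCIe figures; protocol overhead is ignored, so real drives land a little lower.

```python
# Theoretical payload bandwidth of a 4-lane PCIe link (rough estimate).

def pcie_x4_bandwidth_gbps(transfer_rate_gt_s):
    """Payload bandwidth of a 4-lane link in GB/s.

    PCIe 3.0 and 4.0 use 128b/130b encoding, so 130 bits on the wire
    carry 128 bits of data; higher-level protocol overhead is ignored.
    """
    lanes = 4
    bits_per_byte = 8
    encoding_efficiency = 128 / 130
    return transfer_rate_gt_s * encoding_efficiency * lanes / bits_per_byte

print(f"PCIe 3.0 x4: ~{pcie_x4_bandwidth_gbps(8):.2f} GB/s")   # ~3.94 GB/s
print(f"PCIe 4.0 x4: ~{pcie_x4_bandwidth_gbps(16):.2f} GB/s")  # ~7.88 GB/s
```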
Phison itself has confirmed that the next generation of its controller, the PS5018-E18, will be available from the second quarter of 2020, so we could see these speeds in SSDs by next spring.
Written by Roberto Adeva
At ExtremeTech, we talk a lot about process nodes, but we do not often discuss what a process node technically is. As Intel's 10nm node moves into production, I have noticed an uptick in conversations around this topic, along with confusion about whether TSMC and Samsung hold a manufacturing advantage over Intel (and, if so, how large that advantage is).
Process nodes are usually named with a number followed by the nanometer abbreviation: 32 nm, 22 nm, 14 nm, and so on. There is no objective, fixed relationship between any feature of the CPU and the name of the node. This was not always the case. From the 1960s through the late 1990s, nodes were named according to their gate lengths. This IEEE chart shows the relationship:
For a long time, gate length (the length of the transistor's gate) and half-pitch (half the distance between two identical features on a chip) matched the process node name, but the last time that was true was 1997. Half-pitch continued to match the node name for several more generations, but it no longer bears any practical relation to it. In fact, it has been a long time since the geometric scaling of process nodes matched what the curve would look like if we could still effectively shrink feature sizes.
If we had maintained the geometric scaling needed to keep node names and actual feature sizes in sync, we would have dropped below 1 nm manufacturing six years ago. The numbers used to designate each new node are simply numbers chosen by the companies. In 2010, the ITRS (more about them in a moment) referred to the technologies each node introduced as enabling "equivalent scaling." As we approach the end of the nanometer scale, companies may start quoting angstroms instead of nanometers, or we may simply start using decimal points. When I started working in this industry, it was much more common to see reporters describe process nodes in microns rather than nanometers: 0.18 micron or 0.13 micron, for example, instead of 180 nm or 130 nm.
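As a rough illustration of what that classic geometric scaling would have implied, here is a tiny sketch of the traditional "0.7x linear shrink per node" rule, which roughly halves transistor area each generation. The starting point is my own choice for illustration, not a figure from the article.

```python
# The classic "0.7x linear shrink per node" rule: each full node step
# multiplies linear dimensions by ~0.7, which roughly halves transistor
# area (0.7^2 ~= 0.49). Starting point chosen purely for illustration.
node_name_nm = 130.0  # e.g. the 130 nm node, circa 2001
generation = 0
while node_name_nm >= 1.0:
    print(f"generation {generation:2d}: ~{node_name_nm:5.1f} nm")
    node_name_nm *= 0.7
    generation += 1
# After about 14 ideal shrinks the "name" drops below 1 nm, far smaller
# than anything actually manufactured -- one way to see that modern node
# names are labels chosen by the companies rather than measurements.
```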
Semiconductor manufacturing involves enormous investment and a lot of long-term research. The average time between a new technological approach being introduced in a research paper and it reaching large-scale industrial production is in the range of 10 to 15 years. Decades ago, the semiconductor industry recognized that it would be in everyone's interest to have a general roadmap for the introduction of nodes and the feature sizes those nodes would target. This allowed all the pieces of the puzzle needed to bring a new node to market to be developed in parallel. For many years, the ITRS, the International Technology Roadmap for Semiconductors, published a general roadmap for the industry. These roadmaps looked 15 years ahead and set broad goals for the semiconductor market.
The ITRS was published from 1998 to 2015. In 2013–2014, the ITRS reorganized itself into ITRS 2.0, but quickly recognized that its mandate, to provide "the main reference for future university, consortium, and industry researchers to stimulate innovation in various areas of technology," required the organization to significantly expand its scope and coverage. The ITRS was retired and a new organization was created: the IRDS, the International Roadmap for Devices and Systems, with a much broader mandate covering a wider range of technologies.
This change in scope and direction reflects what is happening in the foundry industry. The reason we stopped tying gate length or half-pitch to the node name is that these features either stopped scaling or began scaling much more slowly. Instead, companies have incorporated a variety of new technologies and manufacturing approaches to keep node scaling going. At 40/45 nm, companies like GF and TSMC introduced immersion lithography. Double patterning was introduced at 32 nm. Gate-last manufacturing was a feature of 28 nm. FinFETs were introduced by Intel at 22 nm and by the rest of the industry at the 14/16 nm node.
Companies sometimes introduce features and capabilities at different times. AMD and TSMC introduced immersion lithography at 40/45 nm, but Intel waited until 32 nm to use the technique, preferring to deploy double patterning first. GlobalFoundries and TSMC began using double patterning more heavily at 32/28 nm. TSMC used gate-last construction at 28 nm, while Samsung and GF used gate-first technology. But as progress slows, we have seen companies lean more on marketing, with more "nodes" being defined. Instead of nodes being spaced fairly far apart numerically (90, 65, 45), companies such as Samsung are launching nodes that sit right on top of one another, numerically:
I think it is fair to say that this naming strategy is confusing, because there is no way to know which process nodes are advanced variants of earlier nodes unless you have the chart on hand.
Even though node names are no longer tied to specific feature sizes, and certain features no longer scale, semiconductor manufacturers are still finding ways to improve key metrics. Those are real technical improvements. But because the gains are harder to achieve and take longer to develop, companies are also leaning more on what amount to naming exercises. Samsung, for example, now has many more node names than it used to. That part is marketing.
The parameters of Intel's 10 nm manufacturing process are very close to the values TSMC and Samsung use for what they call a 7 nm process. The chart below is from WikiChip and combines the known feature sizes of Intel's 10nm node with the known feature sizes of TSMC's and Samsung's 7nm nodes. As you can see, they are very similar:
The delta 14 nm / delta 10 nm column indicates how much each company shrank a given feature compared with its previous node. Intel and Samsung have a tighter minimum metal pitch than TSMC, but TSMC's high-density SRAM cells are smaller than Intel's, probably reflecting the needs of the Taiwanese foundry's different customers. Samsung's cells, meanwhile, are even smaller than TSMC's. Overall, however, Intel's 10 nm process hits many of the same key metrics as what TSMC and Samsung call 7 nm.
Individual chips may still have feature sizes that differ from these figures because of specific design goals. The numbers manufacturers provide describe a typical implementation expected on a given node and do not necessarily correspond to any specific chip.
There has been some question about how relevant these figures are to Intel's 10nm+ process (used for Ice Lake), since I believe they were originally published for Cannon Lake. It is true that the expected specifications of Intel's 10nm node may have shifted slightly, but 14nm+ still closely matched the original 14nm figures. Intel has said it is still targeting a 2.7x scaling factor for 10nm versus 14nm, so beyond that, how 10nm+ might differ remains speculation.
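For readers wondering what a "2.7x scaling factor" means in concrete terms, here is a small worked example. It assumes the factor refers to transistor density (devices per unit area), which is how Intel usually quotes it; the arithmetic is mine, not Intel's.

```python
# What a 2.7x density scaling factor implies, assuming "density" means
# transistors per unit area.
import math

density_gain = 2.7

area_ratio = 1 / density_gain            # area needed for the same circuit
linear_shrink = math.sqrt(area_ratio)    # shrink applied to each dimension

print(f"Same logic occupies ~{area_ratio:.0%} of the old area")     # ~37%
print(f"Equivalent linear shrink per side: ~{linear_shrink:.2f}x")  # ~0.61x
# For comparison, a classic full-node shrink is ~0.7x linear (2x density),
# so 2.7x is a somewhat larger jump than a traditional single node step.
```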
The best way to understand a new process node is to think of the term as a generic label. When a foundry talks about deploying a new process node, what it means boils down to this:
"We have created a new manufacturing process with smaller features and closer tolerances. To achieve this goal, we have integrated new manufacturing technologies. We refer to this set of new manufacturing technologies as a process node because we want a generic term that captures the idea of progress and better capabilities. "
Any additional questions on the subject? Drop them below and I will answer them.
One word manufacturers repeat whenever they announce a curved monitor is "immersion." A curved monitor draws you further into the action, especially if it is a widescreen model. That is why an ultrawide monitor practically needs to be curved to be more immersive.
With a 16:9 monitor, it is normal to have doubts, and with that aspect ratio either a curved or a flat model is a reasonable choice. However, as soon as we move to ultrawide formats, there is little room for doubt: a curved monitor is much better. On a flat ultrawide, the distance from the center of the screen to its far corners means that content at the edges sits farther from your eyes than it would on a curved monitor, which reduces color consistency. This is particularly noticeable on a TN panel.
Unlike curved TVs, where viewing them slightly off-axis ruins the image with reflections, this does not happen with monitors because of the coating used on the panels, which is matte in most cases. This even lets you use a curved monitor as a secondary display, wrapping the image more "around" your head instead of leaving it far off to the side.
Among curved monitors, most use a 21:9 aspect ratio, although there are also truly crazy options like the 32:9 models Samsung has on the market, which are like having two 16:9 monitors side by side.
It also depends on how you are going to use it, since buying a monitor for office work is not the same as buying one for photo editing or gaming. If you buy a monitor for work and keep several documents on screen at once, it hardly matters whether it is curved or flat. Even for work, though, a curved ultrawide can be worthwhile, because it often lets you do without a second monitor.
However, for editing photos or video the situation changes, because the curvature can cause problems when correcting perspective or other elements. You will never see a professional photographer using a curved monitor, even though almost all curved monitors use IPS or VA panels and offer good color reproduction.
This brings us to the fact that curved screens reduce distortion: a flat screen emits light in a straight line, while a curved screen directs more of it toward where we sit. This also helps cover a wider field of view, which is why curved monitors can be, and usually are, larger. A 27-inch flat monitor typically has a curved equivalent of around 31.5 inches that occupies a similar amount of desk space thanks to the curvature.
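To make the size comparison concrete, here is a quick geometry sketch showing how a monitor's diagonal and aspect ratio translate into width and height. The specific sizes below are illustrative choices of mine, not the article's; note how a 34-inch 21:9 ultrawide ends up roughly as tall as a 27-inch 16:9 panel while being much wider.

```python
# Width and height of a flat panel from its diagonal and aspect ratio.
import math

def panel_dimensions(diagonal_in, aspect_w, aspect_h):
    """Return (width, height) in inches for the given diagonal and ratio."""
    unit = diagonal_in / math.hypot(aspect_w, aspect_h)
    return aspect_w * unit, aspect_h * unit

examples = [
    ("27\" 16:9", 27.0, (16, 9)),
    ("31.5\" 16:9", 31.5, (16, 9)),
    ("34\" 21:9 ultrawide", 34.0, (21, 9)),
]

for label, diag, (w, h) in examples:
    width, height = panel_dimensions(diag, w, h)
    print(f"{label:>20}: {width:5.1f} x {height:5.1f} in")
```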
For gaming, curved ultrawide monitors are the clear winners. The immersion is unmatched and we get more horizontal resolution in games. For consuming content, we can watch cinema-format movies full screen without black bars; however, 16:9 videos and series will show large black bars on the sides.
The amount of curvature also matters, since a slight curvature can be more than enough for gaming without being especially noticeable. There are several curvature ratings, with 1800R and 2300R the most common. The number indicates the radius of curvature in millimeters, so 1800R, often cited as ideal for curved monitors, is a more pronounced curve than 2300R. Unlike televisions, monitors do not need very aggressive curvature, because they sit closer to your eyes and the panels are smaller.
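If you are curious how much an "1800R" panel actually bows, here is a rough approximation. The sagitta formula below treats the panel width as a chord of the circle, and the 794 mm width is my own estimate for a 34-inch 21:9 panel, so take the results as ballpark figures.

```python
# Approximate depth of the curve (sagitta) for a given curvature radius.
# The radius in "1800R" is in millimetres; the panel width is treated as
# the chord of the circle, which is close enough for illustration.
import math

def bow_depth_mm(panel_width_mm, radius_mm):
    """Depth of the curve for a panel of the given width and radius."""
    half = panel_width_mm / 2
    return radius_mm - math.sqrt(radius_mm**2 - half**2)

width = 794  # rough width in mm of a 34" 21:9 panel (illustrative)
for radius in (1800, 2300, 3000):
    print(f"{radius}R: panel bows ~{bow_depth_mm(width, radius):.0f} mm")
```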
So, assuming you are not going to edit photos, why not buy a curved monitor? Well, there are other factors to take into account.
First, there is the price. Curved panels are more expensive and complicate the design and assembly process. They get cheaper all the time, but as a rule they tend to cost a bit more, especially since most curved monitors use VA panels, while flat gaming monitors in the lower price ranges are almost all TN.
That difference also means these monitors usually have a minimum response time of around 4 ms, versus 1 ms for TN, and can show the occasional ghosting problem. Another important consideration is where you plan to put it, since a curved monitor looks much worse mounted on a wall than a flat one does.
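For context on why those response-time figures matter, here is a small calculation (my own, not the article's) comparing pixel response time with the time available per frame at common refresh rates: the larger the fraction of a frame a transition takes, the more visible ghosting tends to be.

```python
# Pixel response time versus time per frame at common refresh rates.
response_times_ms = {"typical VA (curved)": 4.0, "fast TN": 1.0}

for hz in (60, 100, 144):
    frame_ms = 1000 / hz
    shares = ", ".join(f"{name}: {rt / frame_ms:.0%} of a frame"
                       for name, rt in response_times_ms.items())
    print(f"{hz} Hz -> {frame_ms:.1f} ms per frame ({shares})")
```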
There are curved monitors at all kinds of prices and feature levels. Among the most basic Full HD options we have the Samsung LC24F396FHU, a 24-inch Full HD model priced at 129 euros. At 27 inches there is the Philips 278E8QJAB for 158.99 euros, and for 199 euros we can go up to 32 inches with the Samsung C32F391FWU. If we want 144 Hz, the HP 27x with 1800R curvature is another good option at 254 euros.
In the high-end range, we can move up to monitors like the Samsung C32JG56QQUX: 32 inches, WQHD, 144 Hz, 4 ms, and HDR, priced at 499 euros. If we want 4K, the Samsung U32R590C is one of the cheapest at 369 euros.
Among curved ultrawide gaming monitors, one of the most complete on the market is the BenQ EX3501R, with 35 inches, 100 Hz, HDR, and a resolution of 3440 x 1440 pixels, at a competitive price of 599 euros.
Sales data is a clear indicator of market trends, and one of the best examples is the data published by the German hardware store Mindfactory, which releases monthly sales figures for Intel and AMD processors. And lately, the red team has been selling as it probably never has in its history.
The gaming community was waiting for AMD's Ryzen 3000 before upgrading processors, which explains the drop in sales in April, May, and June. So much so that in June the German store sold 9,500 AMD processors and 4,500 Intel processors, while in July it sold 18,000 AMD and 5,000 Intel.
AMD's new processors went on sale on July 7, and in just 24 days they delivered AMD's best sales month since the store began keeping records, taking 79% of unit sales versus only 21% for Intel. A large portion of those sales came from Ryzen 3000 processors, with the Ryzen 7 3700X the best seller of all, because it offers the best price/performance ratio in the high end with its 8 cores and 16 threads at up to 4.4 GHz.
In multicore performance it is on par with the i9-9900K, which costs 127 euros more (361.90 versus 489). That price difference pays for a better graphics card: assuming a budget of 900 euros for a processor and a graphics card, you will get more performance from a Ryzen 7 3700X plus an RTX 2070 SUPER than from an i9-9900K plus an RTX 2060 SUPER. And the community knows it. Interest in this chip is such that the Ryzen 7 3700X alone sold nearly as many units as all of Intel's processors combined.
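The budget argument is easy to check with simple arithmetic. The CPU prices are the article's figures; the GPU conclusions in the comments are my own rough reading of street prices at the time, so treat them as illustrative.

```python
# "Same budget, faster GPU": how much is left for a graphics card after
# each CPU, given a 900-euro CPU + GPU budget.
budget = 900

cpus = {"Ryzen 7 3700X": 361.90, "Core i9-9900K": 489.00}

for cpu, price in cpus.items():
    left_for_gpu = budget - price
    print(f"{cpu}: {price:.2f} EUR -> {left_for_gpu:.2f} EUR left for the GPU")

# With ~538 EUR left over, the 3700X buyer can stretch to an RTX 2070
# SUPER-class card, while the ~411 EUR left after a 9900K lands closer to
# an RTX 2060 SUPER -- which is the comparison the article is drawing.
```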
After the 3700X, the best-selling processors were the Ryzen 5 3600 and the Ryzen 5 2600, the latter boosted by recent offers and discounts that brought it down to only 120 euros with two bundled games. It is very hard to compete with that. In total, AMD took 79% of sales, the bulk of them Ryzen chips, split between Ryzen 3000 (around 50%) and Ryzen 2000 (around 32%).
In Intel's case, the best sellers were its three most powerful chips, the 9900K (and KF), the 9700K, and the 9600K; yet those three processors, together with all the other Coffee Lake-R parts (which account for 70% of its sales), do not reach the sales the Ryzen 7 3700X achieved on its own.
As we can see, this is great news for AMD, which already surpassed Intel in multicore performance with the arrival of Ryzen. Now, in addition to extending its advantage in those workloads, it is closing in on Intel's single-core performance. And Intel still has no answer on the table: we will have to wait at least until the end of the year to see another 14 nm cycle with Comet Lake, which will go up to 10 cores.
Where Intel still dominates right now is in laptops, yes, but manufacturers are integrating more and more AMD processors, and we are seeing very cheap laptops with Ryzen chips, like the ASUS R570ZD-DM266, with a Ryzen 5 2500U, 8 GB of RAM, a 256 GB SSD, and a GTX 1050 for 459 euros.
TSMC's current 7 nm process is called N7. In the future, the company will launch N7+, which will be its first process to use EUV (extreme ultraviolet) lithography. The process it has just introduced is N7P, an optimization of N7 that uses the same design rules and is compatible with the current N7.
This improvement will encourage manufacturers to opt for N7P rather than move to N7+ for now, and later to jump straight to 5 or 6 nm with EUV. Samsung is the furthest along with EUV, while TSMC uses the technology on only some process layers; with 6 nm it will take a big step forward in the number of layers produced with this new technology.
Thanks to this, the N7P process allows a 7% increase in chip performance, or a 10% reduction in power consumption at the same performance. This is a significant jump, especially considering that manufacturers do not have to adapt their designs to use it. N7+ will be a bigger jump, improving density by 20% along with 10% more performance or 15% lower consumption; however, it requires redesigning the chip's physical implementation. With that in mind, it will be interesting to see which process Apple ultimately chooses this year, although N7P seems the most logical option. AMD could even debut this new node with the 3950X, which launches next month.
As for 5 nm, we will have to wait until next year to see the first chips using N5. The improvement will be substantial: density will increase by 80% compared with N7, and performance will improve by 15 to 25%, or alternatively power consumption will drop by 30%. The node will also remain in use for much longer than 7 nm.
After N5 will come N5P, which will improve performance by 7% or reduce consumption by 15% compared with N5. N5P is expected, in principle, by the end of 2021.
Written by Alberto Garcia