Search the Community

Showing results for tags 'memory'.



Found 12 results

  1. Why can’t I remember? Model may show how recall can fail

     Model may predict why you can’t recall what you know you remember.

     Physicists can create serious mathematical models of stuff that is very far from physics—stuff like biology or the human brain. These models are hilarious, but I'm still a sucker for them because of the hope they provide: maybe a simple mathematical model can explain the sexual choices of the disinterested panda? (And, yes, I know there is an XKCD about this very topic). So a paper from a bunch of physicists claiming to have found a fundamental law of memory recall was catnip to me.

     To get an idea of how interesting their work is, it helps to understand the unwritten rules of “simple models for biology.” First, the model should be general enough that the predictions are vague and unsatisfying. Second, if you must compare with experimental data, do it on a logarithmic scale so that huge differences between theory and experiment at least look tiny. Third, if possible, make the mathematical model so abstract that it loses all connection to the actual biology.

     By breaking all of these rules, a group of physicists has come up with a model for recall that seems to work. The model is based on a concrete idea of how recall works, and, with pretty much no fine-tuning whatsoever, it provides a pretty good prediction for how well people will recall items from a list.

     Put your model on the catwalk

     It's widely accepted that memories are encoded in networks of neurons. We know that humans have a remarkable capacity to remember events, words, people, and many other things. Yet some aspects of recall are terrible. I’ve been known to blank on the names of people I’ve known for a decade or more. But even simpler challenges fail. Given a list of words, for instance, most people will not recall the entire list. In fact, a remarkable thing happens. Most people will start by recalling words from the list. At some point, they will loop back and recall a word they’ve already said. Every time this happens, there is a chance that it will trigger another new word; alternatively, the loop could start to cycle over other words already recalled. The more times a person loops back, the higher the chance that no new words will be recalled.

     Based on these observations, the researchers created a model based on similarity. Each memory is stored in a different but overlapping network of neurons. Recall jumps from a starting point to the next item that has the greatest network overlap with the previous item. The process of recall suppresses an immediate jump back to the item that was just recalled, even though that item would have the most overlap. Under these simple rules, recall follows a trajectory that loops back on itself at some random interval. However, if recall were completely deterministic, the first loop back to a word that was already recalled would result in an endless repetition of the same few items. To prevent this, the model is probabilistic, not deterministic: there is always a chance of jumping to a new word and breaking out of a loop.

     Boiling all this down, the researchers show that, given a list of items of a known length, the model predicts the average number of items that can be recalled. There is no fine-tuning here at all: if you take the model above and explore the consequences, you get a fixed relationship between list length and number of items recalled. That's pretty amazing. But is it true?
     Experiments are messy

     At first sight, some experiments immediately contradict the researchers’ model. For instance, if the subject has a longer period of time to look at each word on the list, they will recall more words. Likewise, age and many other details influence recall. But the researchers point out that their model assumes that every word in the list is stored in memory. In reality, people are distracted. They may miss words entirely or simply not store the words they see. That means that the model will always overestimate the number of words that can be recalled.

     To account for this, the researchers performed a second set of experiments: recognition tests. Some subjects did a standard recall test. They were shown a list of words sequentially and asked to recall as many words as possible. Other subjects were shown a list of words sequentially, then shown words in random order and asked to choose which words were on the list. The researchers then used their measured recognition data to set the total number of words memorized. With this limit, the agreement between their theoretical calculations and experiments is remarkable. The data seem to be independent of all parameters other than the length of the list, just as the model predicts. The result also seems to tell us that the variation observed in previous experiments lies not in recall but in memorization. (Both the similarity-driven walk and this recognition correction are sketched in code after this item.)

     A delicate point

     So what does the model tell us? It may provide some insight into the actual mechanisms of recall. It may also point to how we can construct and predict the behavior of neural-network-based memories. But (and maybe this is my failure of imagination) I cannot see how you would actually use the model beyond what it already tells us.

     Physical Review Letters, 2020, DOI: 10.1103/PhysRevLett.124.018101

     Source: Why can’t I remember? Model may show how recall can fail (Ars Technica)
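     The recall rule and the recognition correction described in this item are simple enough to simulate. Below is a minimal sketch (not the authors' code) that stores each list item as a random binary pattern, measures overlap as the number of shared active units, and always jumps to the most-overlapping item other than the one just recalled. For brevity it uses only the deterministic core of the rule, stopping when the trajectory starts repeating; the paper's probabilistic escape from loops is omitted, and the pattern size and the 0.8 recognition rate are illustrative assumptions.

```python
import random

def simulate_recall(list_length, pattern_size=64):
    """Deterministic core of the similarity-driven recall walk: always jump
    to the stored item with the greatest overlap, suppressing an immediate
    return to the item just recalled. Stops once a transition repeats."""
    patterns = [[random.random() < 0.5 for _ in range(pattern_size)]
                for _ in range(list_length)]

    def overlap(a, b):
        return sum(x and y for x, y in zip(a, b))

    previous, current = None, 0
    recalled = {0}
    seen = set()
    while (previous, current) not in seen:
        seen.add((previous, current))
        candidates = [i for i in range(list_length) if i not in (previous, current)]
        previous, current = current, max(
            candidates, key=lambda i: overlap(patterns[current], patterns[i]))
        recalled.add(current)
    return len(recalled)

def predicted_recall(nominal_length, recognition_rate, trials=300):
    """Words never stored cannot be recalled: shrink the list to the number
    of memorized items (estimated from recognition tests) before applying
    the model."""
    effective = max(3, round(nominal_length * recognition_rate))
    return sum(simulate_recall(effective) for _ in range(trials)) / trials

# The model's central claim: average recall depends only on list length.
for n in (8, 16, 32, 64):
    print(f"list length {n:3d} -> ~{predicted_recall(n, 0.8):.1f} items recalled")
```

     In this toy version the walk soon falls into a cycle, so the number of distinct items recalled grows sublinearly with list length; that matches the qualitative shape of the prediction described above, though it is not a reproduction of the paper's exact law.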
  2. PC DDR4 pricing back to where it was in Oct 2016. Further declines expected in Q1 and Q2.

     Technology market intelligence firm TrendForce, or more precisely its DRAMeXchange division, has published what looks to be some good news for PC enthusiasts. Last year we saw signs that this might happen, but now some trends have been set in motion which should deliver "significant price declines for DRAM products during 1H19". In brief, there are both seasonal and oversupply factors in play here, and it is possible that we will see PC DRAM prices decline by 20 per cent or so by the end of Q1 2019.

     DRAMeXchange notes that contract prices of DRAM products across all major application markets already registered declines of more than 15 per cent month-on-month in January, and they will continue their descent in February and March. If that is really the case, then a 20 per cent drop in Q1 looks to be a conservative estimate. Looking ahead to Q2 2019, DRAMeXchange reckons the oversupply situation will persist and mainstream DRAM products will drop by a further 15 per cent on average in this period.

     We all know new tech, especially new smartphone releases, can eat up available DRAM supply, but demand related to 5G, AIoT, IIoT, and automotive electronics is still in the growth stage, while the smartphone market has decelerated as people hold onto their phones for longer due to lack of innovation. It remains to be seen whether Android phone makers can produce devices compelling enough at the upcoming MWC to spur a flurry of upgrades.

     Considering PC market specifics, DRAMeXchange observes that the average contract price of mainstream 8GB PC DRAM modules is on its way to under US$45 at the time of reporting. Server RAM oversupply has affected the market similarly, in this case even more severely, as it is expected to see a price decline of nearly 30 per cent QoQ.

     PC DDR4 pricing in the UK today

     All the above talk of contract and spot prices of DRAM might seem a little detached from what we actually pay for PC memory modules. Therefore it is worth a look at the trends on sites like CamelCamelCamel, which back up the analyst charts and trends. For example, here in the UK, Crucial 4GB DDR4 2400 MT/s memory modules are at their cheapest price since October 2016, at £24.28. I also checked out the Corsair Vengeance LPX 8GB DDR4 2400MHz C16 module pricing. That is also trending nicely for would-be buyers at £48.62; again, this is probably the best pricing we have seen for these modules since October 2016.

     Should you therefore wait a little longer if you are thinking about upgrading your PC RAM? According to the TrendForce analysts, it would seem the answer is yes. No advice intended, remember 'stuff' happens, and the GBP could move very strongly one way or another in the coming weeks, for example.

     View: Original Article.
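     To put the percentages quoted above together: a 20 per cent decline in Q1 followed by a further 15 per cent in Q2 compounds rather than adds. A quick sketch, taking the article's roughly US$45 8GB module contract price as an illustrative starting point:

```python
price = 45.00                     # approx. contract price of an 8GB module (US$)
after_q1 = price * (1 - 0.20)     # after the ~20% decline expected in Q1 2019
after_q2 = after_q1 * (1 - 0.15)  # after a further ~15% average drop in Q2

print(f"Q1: ${after_q1:.2f}  Q2: ${after_q2:.2f}  "
      f"total: {1 - after_q2 / price:.0%} down")
# -> Q1: $36.00  Q2: $30.60  total: 32% down
```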
  3. Firemin v6.1.0.5020 (2018-05-17) Having Firefox memory issues? Is it using over 1GB of your precious computer memory? Download Firemin and control the amount of memory Firefox uses. If it works for you, use it and if it does not, don’t use it. It is as simple as that! Read more Author’s Homepage: https://www.rizonesoft.com/ Author’s Download: https://www.rizonesoft.com/downloads/ Firemin Download: https://www.rizonesoft.com/downloads/firemin/ Firemin Latest Changelog: https://www.rizonesoft.com/downloads/firemin/update/ All Changelogs: https://github.com/rizonesoft/Firemin/blob/master/Docs/Changes.txt Direct Download: Download: Firemin_5020_Setup.zip Portable: Firemin_5020.zip Version: 6.1.0.5020 Updated: May 17, 2018 File size: 2 MB License: Open Source Requirements: Windows® 7, 8 / 8.1, 10 (32 and 64 bit) RIZONESOFT SOFTWARE If you experience issues with our software or have some suggestions, report it on GitHub here. Changes in version 6.1.0.5020 (2018-05-17):
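     The post doesn't say how Firemin works internally, but utilities of this kind are generally described as periodically trimming the target process's working set through the Win32 API, which pushes pages out of physical RAM until the process touches them again. A hypothetical sketch of that general technique (not Firemin's actual code) using Python's ctypes; the PID argument and the 30-second interval are illustrative assumptions, and it is Windows-only:

```python
import ctypes
import sys
import time

# EmptyWorkingSet (psapi.dll) trims a process's working set; trimmed pages
# move to the standby list and are faulted back in on demand.
psapi = ctypes.WinDLL("psapi")
kernel32 = ctypes.WinDLL("kernel32")

PROCESS_SET_QUOTA = 0x0100
PROCESS_QUERY_INFORMATION = 0x0400

def trim_working_set(pid: int) -> bool:
    handle = kernel32.OpenProcess(
        PROCESS_SET_QUOTA | PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        return False
    try:
        return bool(psapi.EmptyWorkingSet(handle))
    finally:
        kernel32.CloseHandle(handle)

if __name__ == "__main__":
    target_pid = int(sys.argv[1])   # e.g. the PID of firefox.exe
    while True:
        trim_working_set(target_pid)
        time.sleep(30)              # illustrative interval
```

     Note that trimming lowers the working-set number shown in Task Manager rather than reducing what the process has actually allocated, which is one reason results vary from user to user, much as the post itself suggests.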
  4. Firemin v6.1.0.4935 (2018-03-12) Having Firefox memory issues? Is it using over 1GB of your precious computer memory? Download Firemin and control the amount of memory Firefox uses. If it works for you, use it and if it does not, don’t use it. It is as simple as that! Read more Author’s Homepage: https://www.rizonesoft.com/ Author’s Download: https://www.rizonesoft.com/downloads/ Firemin Download: https://www.rizonesoft.com/downloads/firemin/ Firemin Latest Changelog: https://www.rizonesoft.com/downloads/firemin/update/ All Changelogs: https://github.com/rizonesoft/Firemin/blob/master/Docs/Changes.txt Direct Download: Download: Firemin_4935_Setup.zip Portable: Firemin_4935.zip Version: 6.1.0.4935 Updated: March 12, 2018 File size: 2 MB License: Open Source Requirements: Windows® 7, 8 / 8.1, 10 (32 and 64 bit) RIZONESOFT SOFTWARE If you experience issues with our software, have some suggestions or just want to say thank you, post it here. You can also report an issue on GitHub here. Latest Changelog: V6.1.0.4935 (2018-03-12):
  5. In my previous posts about the two Bitdefender bugs related to 7z, I explicitly mentioned that Igor Pavlov’s 7-Zip reference implementation was not affected. Unfortunately, I cannot do the same for the bugs described in this blog post. I found these bugs during the analysis of a prominent antivirus product. As the vendor has not yet published a patch, I will add the name of the affected product in an update to this post as soon as this happens. Since Igor Pavlov has already published a patched version of 7-Zip and exploitation is likely to be easier for 7-Zip, I figured it would be best to publish this post as soon as possible.

     Introduction

     In the following, I will outline two bugs that affect 7-Zip before version 18.00 as well as p7zip. The first one (RAR PPMd) is the more critical and the more involved one. The second one (ZIP Shrink) seems to be less critical, but also much easier to understand.

     Memory Corruptions via RAR PPMd (CVE-2018-5996)

     7-Zip’s RAR code is mostly based on a recent UnRAR version. For version 3 of the RAR format, PPMd can be used, which is an implementation of the PPMII compression algorithm by Dmitry Shkarin. If you want to learn more about the details of PPMd and PPMII, I’d recommend Shkarin’s paper “PPM: one step to practicality” [1]. Interestingly, the 7z archive format can be used with PPMd as well, and 7-Zip uses the same code that is used for RAR3. As a matter of fact, this is the very PPMd implementation that was used by Bitdefender in a way that caused a stack-based buffer overflow.

     In essence, this bug is due to improper exception handling in 7-Zip’s RAR3 handler. In particular, one might argue that it is not a bug in the PPMd code itself or in UnRAR’s extraction code. The outlined heap and stack memory corruptions are only scratching the surface of possible exploitation paths. Most likely there are many other and possibly even neater ways of causing memory corruptions in an attacker-controlled fashion.

     This bug demonstrates again how difficult it can be to integrate external code into an existing code base. In particular, handling exceptions correctly and understanding the control flow they induce can be challenging. In the post about Bitdefender’s PPMd stack buffer overflow, I already made clear that the PPMd code is very fragile. A slight misuse of its API, or a tiny mistake while integrating it into another code base, may lead to multiple dangerous memory corruptions.

     If you use Shkarin’s PPMd implementation, I would strongly recommend hardening it by adding out-of-bounds checks wherever possible, and making sure the basic model invariants always hold. Moreover, in case exceptions are used, one could add an additional error flag to the model that is set to true before updating the model, and only set to false after the update has been successfully completed. This should significantly mitigate the danger of corrupting the model state.

     Programmers Can Read Much More In This Long Article
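     The hardening suggestion at the end (set an error flag before every model update, clear it only on success) is a general pattern that can be sketched without any PPMd internals. A minimal illustration under stated assumptions: the wrapper class and its rescale_and_insert/next_symbol methods are hypothetical, not Shkarin's or 7-Zip's API.

```python
class ModelCorrupted(Exception):
    """Raised when the model may have been left in a half-updated state."""

class GuardedModel:
    """Hypothetical wrapper around a compression model's mutable state.
    The flag is set before an update begins and cleared only after it
    completes, so an exception mid-update leaves the model marked unusable
    instead of silently corrupt (the mitigation suggested in the post)."""

    def __init__(self, model):
        self._model = model
        self._updating = False

    def update(self, symbol):
        if self._updating:
            raise ModelCorrupted("a previous update never completed")
        self._updating = True
        self._model.rescale_and_insert(symbol)  # hypothetical call; may raise
        self._updating = False                  # reached only on success

    def next_symbol(self):
        if self._updating:
            raise ModelCorrupted("refusing to use a possibly corrupt model")
        return self._model.next_symbol()        # hypothetical decode step
```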
  6. mona

    How to see a MEMORY

    [Helen Shen] Every memory leaves its own imprint in the brain, and researchers are starting to work out what one looks like.

     For someone who’s not a Sherlock superfan, cognitive neuroscientist Janice Chen knows the BBC’s hit detective drama better than most. With the help of a brain scanner, she spies on what happens inside viewers’ heads when they watch the first episode of the series and then describe the plot. Chen, a researcher at Johns Hopkins University in Baltimore, Maryland, has heard all sorts of variations on an early scene, when a woman flirts with the famously aloof detective in a morgue. Some people find Sherlock Holmes rude, while others think he is oblivious to the woman’s nervous advances. But Chen and her colleagues found something odd when they scanned viewers’ brains: as different people retold their own versions of the same scene, their brains produced remarkably similar patterns of activity.

     Chen is among a growing number of researchers using brain imaging to identify the activity patterns involved in creating and recalling a specific memory. Powerful technological innovations in human and animal neuroscience in the past decade are enabling researchers to uncover fundamental rules about how individual memories form, organize and interact with each other. Using techniques for labelling active neurons, for example, teams have located circuits associated with the memory of a painful stimulus in rodents and successfully reactivated those pathways to trigger the memory. And in humans, studies have identified the signatures of particular recollections, which reveal some of the ways that the brain organizes and links memories to aid recollection.

     Such findings could one day help to reveal why memories fail in old age or disease, or how false memories creep into eyewitness testimony. These insights might also lead to strategies for improved learning and memory. The work represents a dramatic departure from previous memory research, which identified more general locations and mechanisms. “The results from the rodents and humans are now really coming together,” says neuroscientist Sheena Josselyn at the Hospital for Sick Children in Toronto, Canada. “I can’t imagine wanting to look at anything else.”

     In search of the engram

     The physical trace of a single memory — also called an engram — has long evaded capture. US psychologist Karl Lashley was one of the first to pursue it and devoted much of his career to the quest. Beginning around 1916, he trained rats to run through a simple maze, and then destroyed a chunk of cortex, the brain’s outer surface. Then he put them in the maze again. Often the damaged brain tissue made little difference. Year after year, the physical location of the rats’ memories remained elusive. Summing up his ambitious mission in 1950, Lashley wrote [2]: “I sometimes feel, in reviewing the evidence on the localization of the memory trace, that the necessary conclusion is that learning is just not possible.”

     Memory, it turns out, is a highly distributed process, not relegated to any one region of the brain. And different types of memory involve different sets of areas. Many structures that are important for memory encoding and retrieval, such as the hippocampus, lie outside the cortex — and Lashley largely missed them.
     Most neuroscientists now believe that a given experience causes a subset of cells across these regions to fire, change their gene expression, form new connections, and alter the strength of existing ones — changes that collectively store a memory. Recollection, according to current theories, occurs when these neurons fire again and replay the activity patterns associated with past experience. Scientists have worked out some basic principles of this broad framework. But testing higher-level theories about how groups of neurons store and retrieve specific bits of information is still challenging. Only in the past decade have new techniques for labelling, activating and silencing specific neurons in animals allowed researchers to pinpoint which neurons make up a single memory (see ‘Manipulating memory’).

     Josselyn helped lead this wave of research with some of the earliest studies to capture engram neurons in mice [3]. In 2009, she and her team boosted the level of a key memory protein called CREB in some cells in the amygdala (an area involved in processing fear), and showed that those neurons were especially likely to fire when mice learnt, and later recalled, a fearful association between an auditory tone and foot shocks. The researchers reasoned that if these CREB-boosted cells were an essential part of the fear engram, then eliminating them would erase the memory associated with the tone and remove the animals’ fear of it. So the team used a toxin to kill the neurons with increased CREB levels, and the animals permanently forgot their fear.

     A few months later, Alcino Silva’s group at the University of California, Los Angeles, achieved similar results, suppressing fear memories in mice by biochemically inhibiting CREB-overproducing neurons [4]. In the process, they also discovered that at any given moment, cells with more CREB are more electrically excitable than their neighbours, which could explain their readiness to record incoming experiences. “In parallel, our labs discovered something completely new — that there are specific rules by which cells become part of the engram,” says Silva.

     But these types of memory-suppression study sketch out only half of the engram. To prove beyond a doubt that scientists were in fact looking at engrams, they had to produce memories on demand, too. In 2012, Susumu Tonegawa’s group at the Massachusetts Institute of Technology in Cambridge reported creating a system that could do just that. By genetically manipulating brain cells in mice, the researchers could tag firing neurons with a light-sensitive protein. They targeted neurons in the hippocampus, an essential region for memory processing. With the tagging system switched on, the scientists gave the animals a series of foot shocks. Neurons that responded to the shocks churned out the light-responsive protein, allowing researchers to single out cells that constitute the memory. They could then trigger these neurons to fire using laser light, reviving the unpleasant memory for the mice [5].

     In a follow-up study, Tonegawa’s team placed mice in a new cage and delivered foot shocks, while at the same time re-activating neurons that formed the engram of a ‘safe’ cage. When the mice were returned to the safe cage, they froze in fear, showing that the fearful memory was incorrectly associated with a safe place [6]. Work from other groups has shown that a similar technique can be used to tag and then block a given memory.
     This collection of work from multiple groups has built a strong case that the physiological trace of a memory — or at least key components of this trace — can be pinned down to specific neurons, says Silva. Still, neurons in one part of the hippocampus or the amygdala are only a tiny part of a fearful foot-shock engram, which involves sights, smells, sounds and countless other sensations. “It’s probably in 10–30 different brain regions — that’s just a wild guess,” says Silva.

     A broader brush

     Advances in brain-imaging technology in humans are giving researchers the ability to zoom out and look at the brain-wide activity that makes up an engram. The most widely used technique, functional magnetic resonance imaging (fMRI), cannot resolve single neurons, but instead shows blobs of activity across different brain areas. Conventionally, fMRI has been used to pick out regions that respond most strongly to various tasks. But in recent years, powerful analyses have revealed the distinctive patterns, or signatures, of brain-wide activity that appear when people recall particular experiences. “It’s one of the most important revolutions in cognitive neuroscience,” says Michael Kahana, a neuroscientist at the University of Pennsylvania in Philadelphia.

     The development of a technique called multi-voxel pattern analysis (MVPA) has catalysed this revolution. Sometimes called brain decoding, the statistical method typically feeds fMRI data into a computer algorithm that automatically learns the neural patterns associated with specific thoughts or experiences. (A toy sketch of this kind of analysis appears at the end of this item.) As a graduate student in 2005, Sean Polyn — now a neuroscientist at Vanderbilt University in Nashville, Tennessee — helped lead a seminal study applying MVPA to human memory for the first time [9]. In his experiment, volunteers studied pictures of famous people, locations and common objects. Using fMRI data collected during this period, the researchers trained a computer program to identify activity patterns associated with studying each of these categories. Later, as subjects lay in the scanner and listed all the items that they could remember, the category-specific neural signatures re-appeared a few seconds before each response. Before naming a celebrity, for instance, the ‘celebrity-like’ activity pattern emerged, including activation of an area of the cortex that processes faces. It was some of the first direct evidence that when people retrieve a specific memory, their brain revisits the state it was in when it encoded that information. “It was a very important paper,” says Chen. “I definitely consider my own work a direct descendant.”

     Chen and others have since refined their techniques to decode memories with increasing precision. In the case of Chen’s Sherlock studies, her group found that patterns of brain activity across 50 scenes of the opening episode could be clearly distinguished from one another. These patterns were remarkably specific, at times telling apart scenes that did or didn’t include Sherlock, and those that occurred indoors or outdoors. Near the hippocampus and in several high-level processing centres such as the posterior medial cortex, the researchers saw the same scene-viewing patterns unfold as each person later recounted the episode — even if people described specific scenes differently [1]. They even observed similar brain activity in people who had never seen the show but had heard others’ accounts of it.
     “It was a surprise that we see that same fingerprint when different people are remembering the same scene, describing it in their own words, remembering it in whatever way they want to remember,” says Chen. The results suggest that brains — even in higher-order regions that process memory, concepts and complex cognition — may be organized more similarly across people than expected.

     Melding memories

     As new techniques provide a glimpse of the engram, researchers can begin studying not only how individual memories form, but how memories interact with each other and change over time. At New York University, neuroscientist Lila Davachi is using MVPA to study how the brain sorts memories that share overlapping content. In a 2017 study with Alexa Tompary, then a graduate student in her lab, Davachi showed volunteers pictures of 128 objects, each paired with one of four scenes — a beach scene appeared with a mug, for example, and then a keyboard; a cityscape was paired with an umbrella, and so on. Each object appeared with only one scene, but many different objects appeared with the same scene [11]. At first, when the volunteers matched the objects to their corresponding scenes, each object elicited a different brain-activation pattern. But one week later, neural patterns during this recall task had become more similar for objects paired with the same scene. The brain had reorganized memories according to their shared scene information. “That clustering could represent the beginnings of learning the ‘gist’ of information,” says Davachi.

     Clustering related memories could also help people use prior knowledge to learn new things, according to research by neuroscientist Alison Preston at the University of Texas at Austin. In a 2012 study, Preston’s group found that when some people view one pair of images (such as a basketball and a horse), and later see another pair (such as a horse and a lake) that shares a common item, their brains reactivate the pattern associated with the first pair [12]. This reactivation appears to bind together those related image pairs; people who showed this effect during learning were better at recognizing a connection later — implied, but never seen — between the two pictures that did not appear together (in this case, the basketball and the lake). “The brain is making connections, representing information and knowledge that is beyond our direct observation,” explains Preston. This process could help with a number of everyday activities, such as navigating an unfamiliar environment by inferring spatial relationships between a few known landmarks. Being able to connect related bits of information to form new ideas could also be important for creativity, or imagining future scenarios.

     In a follow-up study, Preston has started to probe the mechanism behind memory linking, and has found that related memories can merge into a single representation, especially if the memories are acquired in close succession [13]. In a remarkable convergence, Silva’s work has also found that mice tend to link two memories formed closely in time. In 2016, his group observed that when mice learnt to fear foot shocks in one cage, they also began expressing fear towards a harmless cage they had visited a few hours earlier [14]. The researchers showed that neurons encoding one memory remained more excitable for at least five hours after learning, creating a window in which a partially overlapping engram might form.
     Indeed, when they labelled active neurons, Silva’s team found that many cells participated in both cage memories. These findings suggest some of the neurobiological mechanisms that link individual memories into more general ideas about the world. “Our memory is not just pockets and islands of information,” says Josselyn. “We actually build concepts, and we link things together that have common threads between them.”

     The cost of this flexibility, however, could be the formation of false or faulty memories: Silva’s mice became scared of a harmless cage because their memory of it was formed so close in time to a fearful memory of a different cage. Extrapolating single experiences into abstract concepts and new ideas risks losing some detail of the individual memories. And as people retrieve individual memories, these might become linked or muddled. “Memory is not a stable phenomenon,” says Preston.

     Researchers now want to explore how specific recollections evolve with time, and how they might be remodelled, distorted or even recreated when they are retrieved. And with the ability to identify and manipulate individual engram neurons in animals, scientists hope to bolster their theories about how cells store and serve up information — theories that have been difficult to test. “These theories are old and really intuitive, but we really didn’t know the mechanisms behind them,” says Preston. In particular, by pinpointing individual neurons that are essential for given memories, scientists can study in greater detail the cellular processes by which key neurons acquire, retrieve and lose information. “We’re sort of in a golden age right now,” says Josselyn. “We have all this technology to ask some very old questions.”

     Source: Nature
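     Both analyses described above, MVPA decoding (as in Polyn's study) and pattern-similarity clustering (as in Tompary and Davachi's), have simple computational cores. A toy sketch with simulated numbers standing in for real fMRI data; the trial counts, voxel counts and effect sizes are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Part 1: MVPA / "brain decoding". Simulated stand-in for fMRI data:
# 90 trials x 500 voxels, three stimulus categories, each weakly
# activating its own subset of voxels.
n_trials, n_voxels = 90, 500
labels = np.repeat([0, 1, 2], n_trials // 3)   # celebrity / location / object
X = rng.normal(size=(n_trials, n_voxels))
for category in range(3):
    X[labels == category, category * 100:(category + 1) * 100] += 0.5

# Train a classifier on the voxel patterns and test on held-out trials.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = 0.33)")

# Part 2: pattern similarity. Sixteen object patterns, four per scene;
# "one week later" each pattern has drifted toward a shared component
# for its paired scene, the hypothesized clustering.
n_objects = 16
scene_of = np.repeat([0, 1, 2, 3], 4)
scene_templates = rng.normal(size=(4, n_voxels))
patterns = rng.normal(size=(n_objects, n_voxels)) + 0.7 * scene_templates[scene_of]

similarity = np.corrcoef(patterns)             # object-by-object correlations
pairs = [(i, j) for i in range(n_objects) for j in range(i)]
same = np.mean([similarity[i, j] for i, j in pairs if scene_of[i] == scene_of[j]])
diff = np.mean([similarity[i, j] for i, j in pairs if scene_of[i] != scene_of[j]])
print(f"same-scene r = {same:.2f}, different-scene r = {diff:.2f}")
```

     The decoder lands well above chance and same-scene correlations exceed different-scene ones, mirroring the qualitative findings in the article; real analyses add preprocessing, cross-validation across scanning runs and statistical controls that are omitted here.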
  7. selesn777

    PGWARE SuperRam 6.7.14.2014

    PGWare SuperRam is a utility that aims to improve system performance by freeing memory. It can run in the background, offers a large number of settings, and has an intuitive, user-friendly interface, so even a beginning user should have no problems using it.

     Key features include:

     • Optimizes computer memory by releasing it.
     • Works in the background to free memory and maintain system stability.
     • Visual representation in the system tray showing the memory currently available.
     • Settings you can adjust to your own purposes.
     • Easy and intuitive user interface that performs memory optimization in real time.

     Changes in SuperRam 6.7.14.2014:

     • Updated to the latest version of Inno Setup to improve the install process and fix bugs in it.

     Homepage: http://www.pgware.com/
     OS: Windows 8, 7, Vista, XP & Windows Server 2012, 2008, 2003 (x86-x64)
     Language: Multilingual
     Medicine: Keygen
     Size: 5.33 MB
  8. IBM has demonstrated a new type of memory technology that the company believes could one day be a replacement for NAND flash. The company’s Theseus Project (conducted in cooperation with the University of Patras in Greece) is the first attempt to combine phase change memory, conventional NAND, and DRAM on a single controller. The result? A hybridized storage solution that outperforms PCIe-based SSDs by between 12 and 275 times.

     The physics of phase change

     Phase change memory is one of a number of alternative memory structures that have been proposed as a replacement for NAND. Phase change memory works by rapidly heating chalcogenide glass, shifting it between its crystalline and amorphous states. In its amorphous state (read as a binary 0), the structure has very high resistance, while in its crystalline state (binary 1) resistance is quite low. Phase change memory can quickly shift between the two states, and research from Intel and Micron has demonstrated the feasibility of intermediate states, which allows two bits of information to be stored per cell. (A toy read-decode sketch appears at the end of this item.) Phase change memory has much lower latency than NAND, much faster read/write times (in theory), and it can withstand millions of write cycles, compared to 30,000 with high-end SLC NAND and as few as 1,000 with TLC NAND. Even better, it’s well positioned compared to other theoretical memory devices. Even so, NAND flash has enormous economies of scale and billions invested in fab plants across the world.

     What IBM has done with Theseus is to incorporate a small amount of PCM into a hybrid structure where its ultra-low-latency characteristics can be effectively leveraged. IBM sees phase change memory as useful in a number of areas; in many cases, the PCM is integrated either as a cache solution or as an additional tier of storage between NAND and DRAM, just as NAND is often integrated between DRAM and a conventional hard drive. Project Theseus is an aggregate controller featuring what appears to be 2.8GB of PCM (36 128Mbit cells per card, 5 cards total). IBM calls this its PSS (Prototype Storage Solution).

     IBM’s latency measurements across various types of requests show the advantage. The PSS solution (that’s the PCM card) completes the overwhelming majority of its requests in under 500 microseconds. The two MLC solutions top out at 14,000 and 20,000 microseconds compared to 2,000 microseconds for the PSS, while the TLC NAND is an order of magnitude slower, topping out at 120,000 microseconds. In short, these early PCM devices, built on 90nm CMOS and at extremely low density (modern NAND flash is now available in 512Gbit sizes compared to 128Mbit for PCM), are a full order of magnitude faster than commercial NAND, with vastly superior write performance and data longevity.

     There’s just one little problem

     IBM makes a point of noting that its PSS solution uses 90nm memory produced by Micron. The only problem? Micron gave notice earlier this year that it was cancelling all of its PCM production and pulling out of the industry. While it left open the door to revisiting the memory tech at some point in the future, it indicated that the superior scaling of 3D NAND was a better option (despite the numerous problems identified with that technology in the short term).

     Where does this leave PCM?
     The 2013 ITRS report notes that NAND performance isn’t actually expected to increase much from present levels — in fact, it’s going to be difficult to maintain current NAND performance while improving density and holding write endurance constant. Right now, PCM is the most promising next-generation memory technology on the market — but if no one steps forward to manufacture it, it’s going to be a tough sell.

     Source
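     The read mechanism described in this item (high resistance in the amorphous state, low resistance in the crystalline state, with intermediate states giving two bits per cell) amounts to thresholding a resistance measurement. A toy sketch of that decoding step; the resistance bands are invented for illustration and do not come from IBM's or Micron's parts:

```python
# Hypothetical resistance windows for a 2-bit-per-cell PCM read, ordered from
# fully crystalline (lowest resistance, bits 11) to fully amorphous (bits 00).
# Real devices calibrate these thresholds per cell and compensate for drift.
THRESHOLDS_OHMS = [(10_000, 0b11), (50_000, 0b10), (200_000, 0b01)]

def read_cell(resistance_ohms: float) -> int:
    """Map a measured cell resistance to a 2-bit symbol by thresholding."""
    for limit, symbol in THRESHOLDS_OHMS:
        if resistance_ohms < limit:
            return symbol
    return 0b00  # highest-resistance band: fully amorphous

for r in (3_000, 30_000, 120_000, 1_000_000):
    print(f"{r:>9} ohms -> bits {read_cell(r):02b}")
```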
  9. Samsung has been working hard this holiday season and today announced its 8Gb (that's 1GB) LPDDR4 mobile DRAM chip. This is not only the industry's first LPDDR4 mobile DRAM, but also the first mobile DRAM to fit such a high density (8 gigabits) on a single die. Four of those dies can be combined for a total of 4GB of mobile memory - an amount that so far has not been achieved commercially. Fabricated on the 20nm manufacturing process, the new LPDDR4 chips are reportedly 50% faster than the fastest LPDDR3 memory on the market, while also consuming 40% less power at 1.1 volts.

     "This next-generation LPDDR4 DRAM will contribute significantly to faster growth of the global mobile DRAM market, which will soon comprise the largest share of the entire DRAM market," said Young-Hyun Jun, executive vice president, memory sales & marketing, Samsung Electronics.

     The new 8Gb LPDDR4 DRAM chip can achieve data transfer speeds of 3,200Mbps per pin - twice the speed of currently produced 20nm LPDDR3 DRAM. Samsung also hints that the resulting 4GB RAM modules will be included in "UHD smartphones, tablets and ultra-slim notebooks," which says something about the jump in smartphone screen resolution expected during the next year. Perhaps the upcoming Galaxy S5 will be one of those UHD smartphones. Mind you, to utilize that kind of memory, the Galaxy S5 will need a 64-bit CPU architecture. Good thing Samsung promises its 64-bit SoC will be ready in time for the phone's announcement (usually in the first half of the year).

     Source
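     The headline per-pin figure translates into module bandwidth with simple arithmetic. A quick sketch; the 32-bit interface width is an assumption for illustration (LPDDR4 is organized as 16-bit channels, commonly used in pairs):

```python
pin_rate_mbps = 3200    # per-pin transfer rate quoted for the 8Gb LPDDR4 die
bus_width_bits = 32     # assumed width: two 16-bit LPDDR4 channels

bandwidth_mb_s = pin_rate_mbps * bus_width_bits / 8   # megabits -> megabytes
print(f"{bandwidth_mb_s / 1000:.1f} GB/s")            # -> 12.8 GB/s
```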
  10. Hi Guyz, just got PES 2014. I'm having issues playing this game because of this. Any idea how I can increase it?
  11. hitminion

    Rate This Laptop

    So what do you think of this laptop: Aspire V3-571G - Intel Core i7-3632QM 2.2GHz with Turbo Boost up to 3.2GHz - NVIDIA GeForce 710M with 2GB dedicated VRAM - 15.6" HD LED LCD - 6GB DDR3 memory - 750GB HDD - DVD-Super Multi DL drive - Acer Nplify 802.11a/g/n + BT 4.0 - 6-cell Li-ion battery. And what operating system would you recommend?
  12. Hi Nsane community :hi: , I have an MSI gaming notebook with 8 GB of Nanya RAM (4 * 2). About eight months after the purchase date, I got a problem with RAM #1 which causes a BSOD (MEMORY MANAGEMENT). I ran the memory diagnostic on Windows 7 and the result was "Hardware problem was detected ..". Why did this happen? I didn't do anything wrong with my laptop (I mean, I really care about its temperature and keep it clean from dust). By the way, the laptop still works well (running games, programs, the net) except that sometimes I get the mentioned BSOD. :please: Has anyone seen or faced a problem like that? And what do you advise me to do about my sick RAM? :( Solved: I brought my laptop to the MSI service center to fix it. The staff there told me that both RAM sticks had a problem. They replaced them with new RAM of the same brand (Nanya). When I asked them what the problem was, they just told me that an "Unexpected error occurred!" The important thing is that my laptop now works like a charm without any annoying BSOD :D