During the development of a block that communicates with an external chip, a behavioral model of that device can be very helpful. Sometimes such a model can be found online, but often we need to create it ourselves. One type of external chip frequently connected to an FPGA is some kind of volatile or non-volatile memory. Unfortunately, in VHDL such models are often implemented suboptimally.
A seemingly easy memory model implementation
The memory model implementation seems pretty straightforward: just create an interface, add an array of a given size to act as the memory cells (store data), some read and write functionality, and whatever additional features the actual chip has. Apart from the fact that such a memory model doesn't need to be synthesizable, it can be implemented just like any other VHDL design block.
When searching the web for an example of a VHDL memory model, we can find many, such as:
- The one from Sabanci University
- Or this one from Doulos
- This one from Auburn University
- And this one from GetMyUni
- And many more
All of them have a similar structure:
entity memory is
  port (
    ~ some interface ~
  );
end entity memory;

architecture arch of memory is
  type mem_type is array(~ some memory depth ~) of std_logic_vector(~ some memory width ~);
  signal mem : mem_type;
begin
  ~ some read and write process(es) ~
end architecture arch;
And these generally work correctly and can be used as examples. However, there is a catch: they are not optimal. As long as you use them for relatively small memories (from my experience, up to a couple of thousand memory words, i.e. an address width in the low teens), they are fine. The problem appears when you need a memory model with a bigger array – let's say several megabits or larger. Once you try to simulate such a model, you may encounter an error regarding memory allocation (this time referring to the physical memory in your computer).
What is the cause of the problem?
In short, the problem is caused by implementing the memory array as a signal. Why?
VHDL simulators (same as Verilog ones) use the Discrete Event Simulation (DES) technique – here you can find a general description of DES. During simulation, each signal assignment creates an event. Each event contains at least the type of event (a signal transition from state A to state B) and a timestamp of when the event will occur.
Why does it matter in this case? A signal allocates more memory (physical memory in your computer) during simulation than its declaration suggests. When simulating a model with a larger memory array, this can become a real problem. Fortunately, there is a way to fix this.
Memory array model in VHDL – the optimal way
To implement a memory model optimally in VHDL, we need to eliminate the signal from the array declaration. This can be done using a variable. This way we can reduce the memory (physical memory in your computer) required to simulate the array. The reason lies in how variables work in VHDL compared to signals.
In VHDL, a variable value is assigned immediately (with no delay), unlike a signal, whose value is updated at the next event (most often in the next delta cycle, e.g. at the next clock edge). Therefore, a variable assignment does not require an event, so in total a variable allocates noticeably less memory (physical memory in your computer) than a signal.
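As a minimal sketch of this difference (the entity and signal names here are illustrative, not from any of the linked examples): inside a clocked process, a variable update is visible on the very next line, while a signal keeps its old value until the next delta cycle, which is exactly why the simulator must allocate an event for it.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity assign_demo is
end entity assign_demo;

architecture sim of assign_demo is
  signal clk : std_logic := '0';
  signal s   : integer   := 0;
begin
  clk <= not clk after 5 ns;

  process(clk)
    variable v : integer := 0;
  begin
    if rising_edge(clk) then
      v := v + 1;  -- immediate update: no event is scheduled
      s <= s + 1;  -- scheduled update: the simulator allocates an event,
                   -- and s keeps its old value until the next delta cycle
      assert v = s + 1
        report "v is already incremented, s is not yet" severity note;
    end if;
  end process;
end architecture sim;
```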
But there is a catch. By default, a variable is a local object, accessible only within a single process block. So it may require a slightly different implementation:
entity memory is
  port (
    ~ some interface ~
  );
end entity memory;

architecture arch of memory is
  type mem_type is array(~ some memory depth ~) of std_logic_vector(~ some memory width ~);
begin
  process(~ some sensitivity list ~)
    variable mem : mem_type;
  begin
    ...
    mem(addr) := data_in;
    ...
    data_out <= mem(addr);
  end process;
  ~ other process blocks ~
end architecture arch;
All reads and writes to the memory array must be implemented in a single process block, which may or may not be a big deal. Also, a multi-port memory implementation becomes a bit different and definitely less clear. What's even worse, it is more difficult to access the memory contents externally – in a testbench, or simply to view them in a simulator's waveform. Fortunately, this can also be fixed.
The VHDL 1993 standard introduced the shared variable. It allows us to define a variable that can be accessed in multiple different process blocks (just like a signal), while still behaving like a variable (immediate assignment, so less physical memory allocation). It can be used like this:
entity memory is
  port (
    ~ some interface ~
  );
end entity memory;

architecture arch of memory is
  type mem_type is array(~ some memory depth ~) of std_logic_vector(~ some memory width ~);
  shared variable mem : mem_type;
begin
  process(~ some sensitivity list ~)
  begin
    ...
    mem(addr) := data_in;
    ...
    data_out <= mem(addr);
  end process;
  ~ other process blocks ~
end architecture arch;
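The skeleton above uses placeholders, so here is a minimal concrete sketch of how they could be filled in. The generics, port names, and the simple synchronous single-port read/write behavior are my assumptions for illustration, not taken from any particular chip:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity memory is
  generic (
    ADDR_WIDTH : positive := 20;  -- 2**20 words, i.e. a multi-megabit array
    DATA_WIDTH : positive := 16
  );
  port (
    clk      : in  std_logic;
    we       : in  std_logic;
    addr     : in  std_logic_vector(ADDR_WIDTH - 1 downto 0);
    data_in  : in  std_logic_vector(DATA_WIDTH - 1 downto 0);
    data_out : out std_logic_vector(DATA_WIDTH - 1 downto 0)
  );
end entity memory;

architecture arch of memory is
  type mem_type is array (0 to 2 ** ADDR_WIDTH - 1)
    of std_logic_vector(DATA_WIDTH - 1 downto 0);
  -- shared variable instead of a signal: no events are scheduled for
  -- the array itself, so far less host memory is allocated
  shared variable mem : mem_type;
begin
  write_proc : process(clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        mem(to_integer(unsigned(addr))) := data_in;
      end if;
    end if;
  end process write_proc;

  -- reads can live in a separate process precisely because mem is shared
  read_proc : process(clk)
  begin
    if rising_edge(clk) then
      data_out <= mem(to_integer(unsigned(addr)));
    end if;
  end process read_proc;
end architecture arch;
```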
The VHDL 1993 standard is supported by the majority of HDL development tools, so in most cases it should be possible to use a shared variable to model a memory array.
There are several gains to the shared variable approach:
- significantly less physical memory allocation (compared to signal)
- reduced simulation times
- easy transition from the signal implementation to shared variable implementation
- similar behavior to signal implementation (multiple process blocks can be used)
- similar access to memory array as in signal implementation
Generally, there are no real disadvantages. The only two far-fetched ones that I can think of are:
- the VHDL 1993 requirement – while newer VHDL versions (like VHDL 2008 or especially VHDL 2019) may not be acceptable for a given project, VHDL 1993 is probably the most widely supported version
- the immediate value assignment may cause different behavior – but generally, simultaneous read and write operations to the same memory location should be avoided in a real application anyway.
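A small self-contained sketch of that behavioral difference (the names here are illustrative): because a shared variable is updated immediately, a read that follows a write to the same address within the same clock edge already sees the new data, whereas a signal-based array would still return the old contents in that cycle.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity rdw_demo is
end entity rdw_demo;

architecture sim of rdw_demo is
  type mem_type is array (0 to 3) of std_logic_vector(7 downto 0);
  shared variable mem : mem_type;
  signal clk      : std_logic := '0';
  signal data_out : std_logic_vector(7 downto 0);
begin
  clk <= not clk after 5 ns;

  process(clk)
  begin
    if rising_edge(clk) then
      mem(0) := x"AB";     -- write is effective immediately
      data_out <= mem(0);  -- this read already returns x"AB";
                           -- a signal-based array would still
                           -- return the old contents here
    end if;
  end process;
end architecture sim;
```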