4.4.3 Priority Levels
The CPU always has higher priority than the DMA controller
when their accesses require the same bus resources. Due to the
system architecture, the CPU can never starve the DMA. DMA
channels of higher priority (lower priority number) may interrupt
current DMA transfers. In the case of an interrupt, the current
transfer is allowed to complete its current transaction. To ensure
latency limits when multiple DMA accesses are requested
simultaneously, a fairness algorithm guarantees an interleaved
minimum percentage of bus bandwidth for priority levels 2
through 7. Priority levels 0 and 1 do not take part in the fairness
algorithm and may use 100% of the bus bandwidth. If a tie occurs
on two DMA requests of the same priority level, a simple round
robin method is used to evenly share the allocated bandwidth.
The round robin allocation can be disabled for each DMA
channel, allowing it to always be at the head of the line. Priority
levels 2 to 7 are guaranteed the minimum bus bandwidth shown
in Table 4-7 after the CPU and DMA priority levels 0 and 1 have
satisfied their requirements.

Table 4-7. Priority Levels

Priority Level    % Bus Bandwidth
0                 100.0
1                 100.0
2                 50.0
3                 25.0
4                 12.5
5                 6.2
6                 3.1
7                 1.5
When the fairness algorithm is disabled, DMA access is granted
based solely on the priority level; no bus bandwidth guarantees
are made.
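The guaranteed percentages in Table 4-7 halve with each step in priority
(100 / 2^(level - 1), truncated to one decimal place). The stand-alone C
sketch below merely reproduces that column as an illustration; it is not
part of the device specification.

#include <stdio.h>

/* Reproduces the "% Bus Bandwidth" column of Table 4-7 for priority
 * levels 2 through 7: each level is guaranteed half the bandwidth of the
 * level above it. Working in tenths of a percent and truncating gives the
 * table entries exactly (e.g. level 5: 1000/16 = 62 -> 6.2%). */
int main(void)
{
    int level;
    for (level = 2; level <= 7; level++) {
        int tenths = 1000 / (1 << (level - 1));
        printf("priority %d: %d.%d%% minimum bus bandwidth\n",
               level, tenths / 10, tenths % 10);
    }
    return 0;
}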
4.4.4 Transaction Modes Supported
The flexible configuration of each DMA channel and the ability to
chain multiple channels allow the creation of both simple and
complex use cases. General use cases include, but are not
limited to:
4.4.4.1 Simple DMA
In a simple DMA case, a single TD transfers data between a
source and a sink (peripherals or memory locations).
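For illustration only, the sketch below sets up such a single-TD transfer
using the DMA channel and TD functions generated by PSoC Creator. The
component instance name DMA_1, the buffer and register names, and the
CyDmac function, flag, and macro names (CyDmaTdAllocate,
CyDmaTdSetConfiguration, CyDmaTdSetAddress, CY_DMA_END_CHAIN_TD, and so
on) are assumptions based on typical PSoC Creator projects and are not
defined on this page; consult CyDmac.h and the DMA component datasheet
for the authoritative API.

#include <project.h>  /* assumed PSoC Creator generated project header */

/* Simple DMA: one TD copies a SRAM buffer to a peripheral data register.
 * txBuffer and UART_TXDATA_PTR are placeholder names. */
#define BUFFER_SIZE  16u

static uint8 txBuffer[BUFFER_SIZE];

void StartSimpleDma(void)
{
    uint8 channel;
    uint8 td;

    /* One byte per burst, one request per burst; the upper address bytes
     * select SRAM as the source space and peripheral space as the sink. */
    channel = DMA_1_DmaInitialize(1u, 1u,
                                  HI16(CYDEV_SRAM_BASE),
                                  HI16(CYDEV_PERIPH_BASE));

    td = CyDmaTdAllocate();

    /* Move BUFFER_SIZE bytes, no next TD (assumed end-of-chain marker),
     * increment only the source address; the register address is fixed. */
    CyDmaTdSetConfiguration(td, BUFFER_SIZE, CY_DMA_END_CHAIN_TD,
                            TD_INC_SRC_ADR);
    CyDmaTdSetAddress(td, LO16((uint32)txBuffer),
                          LO16((uint32)UART_TXDATA_PTR));

    CyDmaChSetInitialTd(channel, td);
    CyDmaChEnable(channel, 1u);
}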
4.4.4.2 Auto Repeat DMA
Auto repeat DMA is typically used when a static pattern is
repetitively read from system memory and written to a peripheral.
This is done with a single TD that chains to itself.
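In code, this amounts to making the TD its own successor. The fragment
below uses the same assumed PSoC Creator API and placeholder names as the
Simple DMA sketch above; the only difference is that the TD handle is
passed as its own nextTd argument.

#include <project.h>  /* assumed PSoC Creator generated project header */

/* Auto repeat DMA: the single TD names itself as its next TD, so the same
 * static pattern is re-sent each time the transfer completes. The channel
 * is assumed to have been initialized as in the Simple DMA sketch. */
static uint8 pattern[4] = { 0x55u, 0xAAu, 0x55u, 0xAAu };

void StartAutoRepeatDma(uint8 channel)
{
    uint8 td = CyDmaTdAllocate();

    /* nextTd == td: the TD chains back to itself when it finishes. */
    CyDmaTdSetConfiguration(td, sizeof(pattern), td, TD_INC_SRC_ADR);
    CyDmaTdSetAddress(td, LO16((uint32)pattern),
                          LO16((uint32)UART_TXDATA_PTR));

    CyDmaChSetInitialTd(channel, td);
    CyDmaChEnable(channel, 1u);
}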
4.4.4.3 Ping Pong DMA
A ping pong DMA case uses double buffering to allow one buffer
to be filled by one client while another client is consuming the
data previously received in the other buffer. In its simplest form,
this is done by chaining two TDs together so that each TD calls
the opposite TD when complete.
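A sketch of the two-TD chain follows, again with the assumed PSoC Creator
API; adcBuffer, ADC_DATA_PTR, and the component-generated
DMA_1__TD_TERMOUT_EN flag are placeholders. Each TD names the other as
its successor, so the channel alternates between the two halves of the
buffer.

#include <project.h>  /* assumed PSoC Creator generated project header */

/* Ping pong DMA: two TDs fill the two halves of adcBuffer in turn. While
 * the DMA fills one half, firmware can consume the other. The channel is
 * assumed initialized with peripheral space as source and SRAM as sink. */
#define HALF_SIZE  32u

static uint8 adcBuffer[2u * HALF_SIZE];

void StartPingPongDma(uint8 channel)
{
    uint8 td0 = CyDmaTdAllocate();
    uint8 td1 = CyDmaTdAllocate();

    /* Each TD chains to the other and raises a terminal-out event so that
     * firmware knows which half has just been filled. */
    CyDmaTdSetConfiguration(td0, HALF_SIZE, td1,
                            TD_INC_DST_ADR | DMA_1__TD_TERMOUT_EN);
    CyDmaTdSetConfiguration(td1, HALF_SIZE, td0,
                            TD_INC_DST_ADR | DMA_1__TD_TERMOUT_EN);

    CyDmaTdSetAddress(td0, LO16((uint32)ADC_DATA_PTR),
                           LO16((uint32)&adcBuffer[0]));
    CyDmaTdSetAddress(td1, LO16((uint32)ADC_DATA_PTR),
                           LO16((uint32)&adcBuffer[HALF_SIZE]));

    CyDmaChSetInitialTd(channel, td0);
    CyDmaChEnable(channel, 1u);
}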
4.4.4.4 Circular DMA
Circular DMA is similar to ping pong DMA except it contains more
than two buffers. In this case there are multiple TDs; after the last
TD is complete it chains back to the first TD.
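The same idea generalizes to any number of buffers: allocate one TD per
buffer, chain each TD to the next, and chain the last back to the first,
as in the sketch below (same assumed PSoC Creator API; rxRing and
UART_RXDATA_PTR are placeholders).

#include <project.h>  /* assumed PSoC Creator generated project header */

/* Circular DMA: NUM_BUFFERS TDs, each filling one slot of a ring and
 * chaining to the next; the last TD chains back to the first. */
#define NUM_BUFFERS  4u
#define SLOT_SIZE    32u

static uint8 rxRing[NUM_BUFFERS][SLOT_SIZE];

void StartCircularDma(uint8 channel)
{
    uint8 td[NUM_BUFFERS];
    uint8 i;

    for (i = 0u; i < NUM_BUFFERS; i++) {
        td[i] = CyDmaTdAllocate();
    }
    for (i = 0u; i < NUM_BUFFERS; i++) {
        uint8 next = td[(i + 1u) % NUM_BUFFERS];  /* wraps to the first TD */
        CyDmaTdSetConfiguration(td[i], SLOT_SIZE, next, TD_INC_DST_ADR);
        CyDmaTdSetAddress(td[i], LO16((uint32)UART_RXDATA_PTR),
                                 LO16((uint32)rxRing[i]));
    }

    CyDmaChSetInitialTd(channel, td[0]);
    CyDmaChEnable(channel, 1u);
}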
4.4.4.5 Scatter Gather DMA
In the case of scatter gather DMA, there are multiple
noncontiguous sources or destinations that are required to
effectively carry out an overall DMA transaction. For example, a
packet may need to be transmitted off the device and the
packet elements, including the header, payload, and trailer, exist
in various noncontiguous locations in memory. Scatter gather
DMA allows the segments to be concatenated together by using
multiple TDs in a chain. The chain gathers the data from the
multiple locations. A similar concept applies for the reception of
data onto the device. Certain parts of the received data may need
to be scattered to various locations in memory for software
processing convenience. Each TD in the chain specifies the
location for each discrete element in the chain.
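The sketch below gathers three noncontiguous packet elements (header,
payload, and trailer) into one outgoing stream by chaining one TD per
element. It uses the same assumed PSoC Creator API; the buffer names and
TX_FIFO_PTR are placeholders.

#include <project.h>  /* assumed PSoC Creator generated project header */

/* Scatter gather DMA, gather direction: one TD per noncontiguous element,
 * chained so that header, payload, and trailer stream out back to back. */
static uint8 header[8];
static uint8 payload[64];
static uint8 trailer[4];

static struct { uint8 *addr; uint16 len; } seg[3] = {
    { header,  sizeof(header)  },
    { payload, sizeof(payload) },
    { trailer, sizeof(trailer) },
};

void StartGatherDma(uint8 channel)
{
    uint8 td[3];
    uint8 i;

    for (i = 0u; i < 3u; i++) {
        td[i] = CyDmaTdAllocate();
    }
    for (i = 0u; i < 3u; i++) {
        /* Chain each element to the next; end the chain after the trailer. */
        uint8 next = (i < 2u) ? td[i + 1u] : CY_DMA_END_CHAIN_TD;
        CyDmaTdSetConfiguration(td[i], seg[i].len, next, TD_INC_SRC_ADR);
        CyDmaTdSetAddress(td[i], LO16((uint32)seg[i].addr),
                                 LO16((uint32)TX_FIFO_PTR));
    }

    CyDmaChSetInitialTd(channel, td[0]);
    CyDmaChEnable(channel, 1u);
}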
4.4.4.6 Packet Queuing DMA
Packet queuing DMA is similar to scatter gather DMA but
specifically refers to packet protocols. With these protocols,
there may be separate configuration, data, and status phases
associated with sending or receiving a packet.
For instance, to transmit a packet, a memory mapped
configuration register can be written inside a peripheral,
specifying the overall length of the ensuing data phase. The CPU
can set up this configuration information anywhere in system
memory and copy it with a simple TD to the peripheral. After the
configuration phase, a data phase TD (or a series of data phase
TDs) can begin (potentially using scatter gather). When the data
phase TD(s) finish, a status phase TD can be invoked that reads
some memory mapped status information from the peripheral
and copies it to a location in system memory specified by the
CPU for later inspection. Multiple sets of configuration, data, and
status phase “subchains” can be strung together to create larger
chains that transmit multiple packets in this way. A similar
concept exists in the opposite direction to receive the packets.
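The sketch below strings one configuration, one data, and one status
phase into a single subchain, using the same assumed PSoC Creator API;
the packet peripheral registers (PKT_CFG_REG_PTR, PKT_DATA_REG_PTR,
PKT_STATUS_REG_PTR) and the buffers are placeholders, and a real design
must also arrange the channel's upper source and destination address
bytes (or use a second channel) so that the reversed direction of the
status read-back is reachable. Queuing more packets is a matter of
chaining the status TD to the next packet's configuration TD instead of
ending the chain.

#include <project.h>  /* assumed PSoC Creator generated project header */

/* Packet queuing DMA: three chained TDs implement the configuration,
 * data, and status phases of one outgoing packet. */
static uint8 cfgImage[4];    /* prepared by the CPU, e.g. data-phase length */
static uint8 packetData[64];
static uint8 statusCopy[4];  /* peripheral status kept for later inspection */

void QueueOnePacket(uint8 channel)
{
    uint8 tdCfg  = CyDmaTdAllocate();
    uint8 tdData = CyDmaTdAllocate();
    uint8 tdStat = CyDmaTdAllocate();

    /* Configuration phase: copy the prepared register image into the
     * peripheral's memory-mapped configuration registers. */
    CyDmaTdSetConfiguration(tdCfg, sizeof(cfgImage), tdData, TD_INC_SRC_ADR);
    CyDmaTdSetAddress(tdCfg, LO16((uint32)cfgImage),
                             LO16((uint32)PKT_CFG_REG_PTR));

    /* Data phase: stream the payload; this single TD could be replaced by
     * a scatter gather subchain. */
    CyDmaTdSetConfiguration(tdData, sizeof(packetData), tdStat, TD_INC_SRC_ADR);
    CyDmaTdSetAddress(tdData, LO16((uint32)packetData),
                              LO16((uint32)PKT_DATA_REG_PTR));

    /* Status phase: read the memory-mapped status back into SRAM. Chain to
     * the next packet's configuration TD here to queue multiple packets. */
    CyDmaTdSetConfiguration(tdStat, sizeof(statusCopy), CY_DMA_END_CHAIN_TD,
                            TD_INC_DST_ADR);
    CyDmaTdSetAddress(tdStat, LO16((uint32)PKT_STATUS_REG_PTR),
                              LO16((uint32)statusCopy));

    CyDmaChSetInitialTd(channel, tdCfg);
    CyDmaChEnable(channel, 1u);
}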
4.4.4.7 Nested DMA
One TD may modify another TD, as the TD configuration space
is memory mapped similar to any other peripheral. For example,
a first TD loads a second TD’s configuration and then calls the
second TD. The second TD moves data as required by the
application. When complete, the second TD calls the first TD,
which again updates the second TD’s configuration. This
process repeats as often as necessary.
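The stand-alone sketch below models that round trip with a simplified,
hypothetical td_config_t structure; the real memory-mapped TD register
layout is defined in the PSoC 3 Technical Reference Manual and is not
reproduced here. The two memcpy calls stand in for the transfers carried
out by the first and second TDs.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical, simplified model of a TD's memory-mapped configuration. */
typedef struct {
    const uint8_t *src;
    uint8_t       *dst;
    uint16_t       count;
} td_config_t;

static uint8_t block0[4] = { 1, 2, 3, 4 };
static uint8_t block1[4] = { 5, 6, 7, 8 };
static uint8_t sink[4];

static td_config_t td2;           /* rewritten by "TD1" on every round   */
static td_config_t jobs[2] = {    /* CPU-prepared images of TD2's config */
    { block0, sink, sizeof(block0) },
    { block1, sink, sizeof(block1) },
};

int main(void)
{
    int round;
    /* Model of the TD1 <-> TD2 loop: TD1 copies a new configuration over
     * TD2 (a plain memory copy, since TD space is memory mapped), then
     * TD2 moves the data that its new configuration describes. */
    for (round = 0; round < 2; round++) {
        memcpy(&td2, &jobs[round], sizeof(td2));  /* TD1's transfer */
        memcpy(td2.dst, td2.src, td2.count);      /* TD2's transfer */
        printf("round %d moved %u bytes, first byte %u\n",
               round, (unsigned)td2.count, (unsigned)sink[0]);
    }
    return 0;
}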