IC Framework Technology
Functional Specification
Compressed OpenAccess Data
Document Revision: 1.2
Document Date: 03-15-2011
Template Version 2.5
© 2011 Cadence Design Systems, Inc.
All rights reserved worldwide.
TABLE OF CONTENTS
1. INTRODUCTION
2. FUNCTIONAL SPECIFICATION
   2.1 Support Compressed Data in an OpenAccess Database
       Description
       Architectural Approach
       Dependencies
       Public Interfaces
   2.2 Define a Control for Managing Compressed Databases
       Description
       Data Flow
       Public Interfaces
       Compatibility
   2.3 Determine Performance versus Compression Tradeoffs
       Description
   2.4 Support Utility to Compress/Decompress Databases
       Description
       Dependencies
       Public Interfaces
   2.5 Data Compatibility
       Description
       Translators
   2.6 Translator Support of Compressed Databases
       Description
       Usage
   2.7 Packaging
       Description
       Dependencies
3. TECHNICAL RISKS AND DEPENDENCIES
   3.1 Technical Risks
   3.2 Technical Dependencies
1. Introduction
The goal of the project is to provide a user the ability to specify that when OpenAccess databases are written to
disk, they are to be written in a form more compressed than how they are written today. The compression
should be compliant with RFC 1951, which is supported by zlib. The reduced size of an OpenAccess database
will reduce user disk space requirements and will improve performance when accessing the data over wide-area networks.
The size of an OpenAccess database file has come under scrutiny of late. An OpenAccess database today is 2
to 7 times larger than an equivalent proprietary database from other EDA tools.
In addition, reading a large OpenAccess database over a global wide-area network can be slow. Experiments
have shown that compressing an OpenAccess database, copying it over the network, and then uncompressing
it was two times faster than simply copying the original database.
In the OpenAccess 22.41 release, the auto-defragmentation of string and name data was introduced and can
help manage the database file size to some degree. This effort is not enough to satisfy the demands of
applications creating very large design databases. Hence, OpenAccess 22.42 will introduce writing data in a
compressed form which will further reduce the database file size when the need arises. There are performance
considerations which are discussed in the requirements as well as in this document.
REQUIREMENTS REFERENCE
Section 2.1: Enable writing and reading of compressed OA data files
Section 2.2: Support partial read
Section 2.3: Enable control of writing compressed data
Section 2.4: Stand-alone compress/uncompress utility
Section 2.5: Performance and file size tradeoffs
Section 2.6: Data Compatibility
Section 2.7: Packaging Requirements
Section 2.8: Requirements on Translators
REFERENCES
OpenAccess 2.2 Requirements Specification: Compressed OpenAccess Data, Michaela Guiney, Feb 2011.
RFC 1951 – DEFLATE Compressed Data Format Specification, L. Peter Deutsch, May 1996, http://www.faqs.org/rfcs/rfc1951.html.
zlib – A Massively Spiffy Yet Delicately Unobtrusive Compression Library, current release 1.2.5, Apr 2010, http://www.zlib.net/.
zlib – Wikipedia, http://en.wikipedia.org/wiki/Zlib.
SEMI P39-0308 – OASIS® Open Artwork System Interchange Standard, SEMI, 2004, 2008.
2. Functional Specification
High performance applications and some customers have requested smaller OpenAccess database files in
order to reduce disk space requirements and to improve load times across wide area networks. Some users
explicitly employ the gzip utility to pack and unpack OpenAccess databases to and from smaller footprint files.
However, the data management of these database files as they change names (e.g., from layout.oa to
layout.oa.gz and back) can become cumbersome for the user.
OpenAccess already supports a kind of implicit data compression. There is the defragmentation of freed space
that may occur when a database is read. In OpenAccess 22.41, additional functionality was added with the
ability to reclaim the space from unused strings and names in a database. Neither of these is adequate to
address the database file size requirements put forth for adoption by some high performance applications.
The goal of this project is to define what it means to support “compressed internal data” in OpenAccess
databases, how the functionality is exposed to users, and what the implications are for data file compatibility
between applications using different versions of OpenAccess shared libraries. It is important that the
functionality not introduce a data model change; this is justified because no new objects are added and
no existing objects are changed in a way that would warrant such a data model change.
FLOW
The following picture depicts how a design library may be transformed in a flow of data from an application
based on 22.42 that saves compressed OpenAccess databases into a form usable by older applications
based on 22.41 and back again.
As noted in the requirements, when the contents of the designLib are in compressed form, they are expected
to occupy one-half to one-third of the space of the uncompressed form of the library. This designLib can be read by
any application using a version of OpenAccess that can read the compressed form of the databases.
However, to make the data readable by older software using older versions of OpenAccess, the user will
have to uncompress the data as shown below.
Once the databases have been transformed to their uncompressed state, the data can be used with
applications that use older versions of OpenAccess. If necessary, the data in the designLib could be recompressed in order for it to be shared with users desiring a smaller footprint of the library.
2.1 Support Compressed Data in an OpenAccess Database
This functionality satisfies the requirements described in Sections 2.1, 2.2, and 2.7 of the Requirements
Specification.
DESCRIPTION
OpenAccess will be enhanced with the ability to write databases where portions of the data are written to
disk in compressed form. The functionality is similar to how OASIS supports compressed blocks of data in
its CBLOCK record:
“A CBLOCK record provides a mechanism for embedding compressed data within the structure of an
OASIS file for additional compactness.”
The compression will be compliant with RFC 1951, which is essentially what the gzip utility uses and what
is supported programmatically by zlib. The zlib documentation describes the library as follows:
“The zlib compression library provides in-memory compression and decompression functions,
including integrity checks of the uncompressed data. This version of the library supports only one
compression method (deflation) but other algorithms will be added later and will have the same stream
interface.
Compression can be done in a single step if the buffers are large enough (for example if an input file is
mmap'ed), or can be done by repeated calls of the compression function. In the latter case, the
application must provide more input and/or consume the output (providing more output space) before
each call.
The compressed data format used by default by the in-memory functions is the zlib format, which is a
zlib wrapper documented in RFC 1950, wrapped around a deflate stream, which is itself documented
in RFC 1951.”
The granularity of the portions of data that are compressed when written (also known as deflated) and
uncompressed when read must be small enough that the existing partial-read behavior is still
supported, yet large enough to provide an adequate reduction in file size. For example, some known “chunks” of data are
described here.
• String table – this table contains all of the strings (e.g., oaString) referenced within a given database.
• Name table – this table contains all of the names (i.e., oaName) referenced within a given database.
• Data table – a data table stores all of the information for a given type of data; for example, the properties, shapes, and nets information are stored in data tables.
Ideally, there should be a fast, programmatic way to query whether a database is compressed or not. For
example, for an oaDesign, there should be a way to make a const query of the design to determine whether its on-disk data is
written in compressed form. This query may read a small design completely into memory, but ideally, for
large designs, the query would partial-read only a small part of the database (maybe just the header) in
order to make the determination. Such an API would be used by a utility reporting the compression state
of the designs in an OpenAccess library.
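A minimal sketch of how such a query might be used follows. The isDataCompressed() call on oaDesign is an assumed name used for illustration only; this specification does not define the final API.

#include <cstdio>
#include "oaDesignDB.h"

using namespace oa;

// Hypothetical helper built on the assumed query; ideally only the header of a
// large design would need to be paged in to answer it.
static void reportCompressionState(const oaScalarName &libName,
                                   const oaScalarName &cellName,
                                   const oaScalarName &viewName)
{
    // Open the design read-only.
    oaDesign *design = oaDesign::open(libName, cellName, viewName, 'r');

    if (design->isDataCompressed()) {       // assumed const query, not an existing OA call
        std::printf("on-disk data is compressed\n");
    } else {
        std::printf("on-disk data is uncompressed\n");
    }
    design->close();
}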
Zlib supports different levels of compression. The gzip utility allows the specification of nine different levels,
from 1 through 9. Level 1 is known as the “fast” compression, where the speed of compression is favored
over how much the data is compressed, and level 9 is known as the “best” compression. The default for
gzip is level 6. A design issue is determining what level is appropriate for the compression of the
OpenAccess data. At the time of this writing, experiments using gzip show some smaller databases
where “fast” compresses a database as well as or better than “best”, but one large customer test case
compressed 2.9x using “best” compared to 2.8x using “fast”. The question is whether “fast” is good
enough (opting for performance over compression) or whether some higher level would provide better compression
at slightly slower performance. There is more discussion about performance versus compression in
Section 2.3.
There is no requirement nor are there plans to support the writing of compressed OpenAccess parasitic
databases.
ARCHITECTURAL APPROACH
Because the support is at such a low-level, it is worth mentioning some of the components involved that
will be enhanced. OpenAccess employs a memory map window that is independent of the underlying
backing storage (which may be a regular file, a shared memory segment, or the OS pagefile). A map file
window object associates a map window with a map file. OpenAccess also employs a map buffer window
object that associates a map window to a map buffer as the backing storage.
One idea is to enhance the map buffer window or introduce a map zlib buffer window that compresses
map buffer chunks before they are written to the associated map file. For a given data table, OpenAccess
calculates an estimated disk space size for the data, and that information could be compared against
a threshold or ratio in order to determine whether to compress the corresponding data.
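As a rough illustration of this idea (a sketch under assumed data structures, not the actual implementation), a chunk of table data could be deflated with zlib's one-shot API and written in compressed form only if the compression pays off; the 0.9 ratio threshold and the compression level shown are placeholders.

#include <vector>
#include <zlib.h>

// Deflate one buffer chunk if compression pays off; otherwise report that the
// raw bytes should be written as-is.
static bool compressChunk(const std::vector<Bytef> &raw,
                          std::vector<Bytef>       &packed,
                          int                       level = Z_BEST_SPEED)
{
    uLongf packedLen = compressBound(static_cast<uLong>(raw.size()));
    packed.resize(packedLen);

    if (compress2(packed.data(), &packedLen,
                  raw.data(), static_cast<uLong>(raw.size()), level) != Z_OK) {
        return false;                       // fall back to an uncompressed write
    }
    if (packedLen >= raw.size() * 0.9) {    // placeholder threshold: not worth compressing
        return false;
    }
    packed.resize(packedLen);
    return true;
}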
DEPENDENCIES
This work introduces a dependency on third-party functionality, specifically a version of zlib that is
compatible with RFC 1951. There is an issue that the version of zlib used by OpenAccess may conflict with another version
of zlib that an application is using.
The proposal to circumvent these issues is to create an OpenAccess-specific, static variant of zlib that is
scoped to an OpenAccess namespace. This code would be built as part of the reference implementation
and contributed to Si2. To minimize the impact on applications linking to OpenAccess, the zlib
functionality could be encapsulated and supplied in an existing library archive like liboaBase.so.
PUBLIC INTERFACES
The support for compressing internal OpenAccess data will be implicit. Section 2.2 describes the public
access control of the capability which will be at the OpenAccess library level. The OpenAccess API minor
revision should be incremented to indicate that there is a version of the OpenAccess API that supports a
new behavior.
2.2 Define a Control for Managing Compressed Databases
This functionality satisfies the requirements described in Section 2.3 of the Requirements Specification.
DESCRIPTION
OpenAccess will define a persistent way for a user to specify whether OpenAccess writes databases with
internally compressed data. This control will be at the library level and be compatible across different
library plug-ins.
There is a desire to support the ability for a user to specify different levels of compression. OpenAccess
could also expose the compression level to the user by making the value of the compression control an
integer value that implies the compression level to use. The allowed values for the compression level
would be 0, 1, 2, ..., 9, where 1 maps to the fastest zlib compression, 9 maps to the best zlib
compression, and 0 denotes that no compression is used (databases are written in uncompressed form).
The absence of the attribute also means that databases will be written in uncompressed form. It may be
easiest if only the absence of the attribute is used to signify that the OpenAccess data in the library is
uncompressed.
The gzip default compression level is 6. Experimentation with different compression levels will be
performed to select a meaningful default compression level that OpenAccess will use. For example, a
compression level of 1 may provide adequate compression with the benefit of being the fastest
performing. Compressing binary data like that in an OpenAccess database will yield different
results than compressing a text file; a compression level of 6 may provide a good
performance/compression balance for text files, but a lower compression level may provide a better
balance for binary data.
An oaIntAppDef stored in the library level DMData database will be used to record the compression
control setting. The absence of the AppDef or a value of zero will signify that the library should not contain
compressed OpenAccess databases. A value between 1 and 9 will indicate that OpenAccess design
databases will be saved in compressed form using the specified compression level. A default value will be
determined early in the implementation phase. New API functions will be defined on the oaLib object for
querying, setting, and clearing the compression control value. Using an AppDef allows the information to
be preserved by older versions of OpenAccess and is DM plug-in independent. It may be desirable to
document this AppDef and that determination will be made early in the implementation phase.
DATA FLOW
Ideally, any application that creates a library will be enhanced with a way that allows the user to specify
whether OpenAccess databases in the library are saved in compressed form. Some applications are
called out in the requirements specification and noted later in this document.
The library compression control will be referenced whenever a design database is saved. It’s not intended
as a dynamic control in the sense that when it is changed, all of the databases in the library will be
synchronized to the setting of the attribute. Setting the attribute will not affect any of the existing
databases in the library. The setting is applied to save actions taken after it has been set. This also
means if the compression level is changed, the new level is used for subsequent save actions. The
compression utility described elsewhere in this document can be used to synchronize the databases in a
library to match its compression setting. Once a database is compressed, it is considered compressed
regardless of the compression level used to compress its data.
PUBLIC INTERFACES
New OpenAccess API functions will be added to the oaLib object to query and set the compression level
control. They may look like the following.
#define oacDefaultCompressionLevel 1

// *****************************************************************************
// oaLib
// *****************************************************************************
class OA_DM_DLL_API oaLib : public oaDMContainer {
public:
    ...
    oaBoolean isDataCompressed() const;
    oaUInt4   getDataCompressionLevel() const;
    void      setDataCompressionLevel(oaUInt4 level = oacDefaultCompressionLevel);
    void      unsetDataCompression();
    ...

The isDataCompressed query reflects the attribute of the library and does not reflect the state of any
given database in the library.
The getDataCompressionLevel query will return a value between 1 and 9 if compression has been
previously enabled using the setDataCompressionLevel function. If the compression control was not
set, this function will throw an exception.
The setDataCompressionLevel function will enable database compression. If the level argument isn’t
specified, a default value is used (1 is used as an example). If the level argument is specified, it must be a
value between 1 and 9; an exception is thrown if any other value is specified. The function will update any
existing compression control level to the new level, but will not update the compression state of the
databases in the library.
The unsetDataCompression function will clear the compression control of the library. It will not update
the compression state of any databases in the library.
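The following fragment sketches the intended usage of these proposed functions; it assumes the API lands as shown above, and the library lookup is only illustrative.

#include "oaDesignDB.h"

using namespace oa;

// Sketch only: enable compressed saves for a library using the proposed API.
static void enableCompression(const oaScalarName &libName)
{
    oaLib *lib = oaLib::find(libName);      // library assumed already defined and open
    if (!lib) {
        return;
    }
    if (!lib->isDataCompressed()) {
        // Subsequent saves into this library would be compressed at level 1;
        // existing databases are left untouched until the utility is run.
        lib->setDataCompressionLevel(1);
    }
}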
COMPATIBILITY
The OpenAccess minor API revision will be incremented to signify the addition of the new API functions and
an enumerated type.
2.3 Determine Performance versus Compression Tradeoffs
This functionality satisfies the requirements described in Section 2.5 of the Requirements Specification.
DESCRIPTION
As early as possible, quantitative analysis will be performed to characterize compression ratios, ideally for
different types of designs. A measure of the read time of uncompressed data will be compared against
the read time of the compressed form of the data and a measure of the write time of uncompressed data
will be compared against the write time of the compressed form of the data.
Ideally, these read and write time measurements should be taken on a local area network (LAN or local
disk) and again across a wide area network (WAN). The WAN read and write times are of interest to
users who distribute their data globally.
To get a better feel for the OpenAccess read and write performance versus compression tradeoffs, a
prototype program will be written to emulate what OpenAccess will be doing in terms of reading and
writing compressed and uncompressed data from and to a file. The program has to provide performance
numbers for the following tasks:
• Write large chunks of data to disk (emulate saving an OA DB)
• Write large chunks of data to a compressed form on disk (emulate saving compressed OA data)
• Read large chunks of data from a file (emulate reading an uncompressed OA DB)
• Read large chunks of data in compressed form from disk and uncompress them (emulate reading and uncompressing OA data)
The above tasks will be performed to compare the performance and compression of all 9 levels of
compression supported by zlib. The runs will have to be done reading and writing to a file on a local disk
and to a file across a WAN in order to gauge if writing and reading compressed data (which includes the
compute overhead of the compression) is on-par with the time to write and read uncompressed data.
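A stripped-down sketch of such a prototype is shown below; it times a raw write against a deflate-then-write of the same chunk using zlib's one-shot API. The file names, chunk handling, and absence of error reporting are simplifications for illustration.

#include <chrono>
#include <cstdio>
#include <vector>
#include <zlib.h>

// Compare the time to write a chunk raw versus deflating it first. Sketch only.
static void compareWriteTimes(const std::vector<Bytef> &chunk, int level)
{
    using clock = std::chrono::steady_clock;

    // Raw write (emulates saving an uncompressed OA database chunk).
    std::FILE *rawFile = std::fopen("chunk.raw", "wb");
    if (!rawFile) return;
    auto t0 = clock::now();
    std::fwrite(chunk.data(), 1, chunk.size(), rawFile);
    std::fclose(rawFile);
    auto t1 = clock::now();

    // Deflate, then write (emulates saving the same chunk in compressed form).
    std::vector<Bytef> packed(compressBound(static_cast<uLong>(chunk.size())));
    uLongf packedLen = static_cast<uLongf>(packed.size());
    std::FILE *zFile = std::fopen("chunk.z", "wb");
    if (!zFile) return;
    auto t2 = clock::now();
    if (compress2(packed.data(), &packedLen, chunk.data(),
                  static_cast<uLong>(chunk.size()), level) == Z_OK) {
        std::fwrite(packed.data(), 1, packedLen, zFile);
    }
    std::fclose(zFile);
    auto t3 = clock::now();

    std::printf("level %d: raw %.3fs, compressed %.3fs (%lu -> %lu bytes)\n",
                level,
                std::chrono::duration<double>(t1 - t0).count(),
                std::chrono::duration<double>(t3 - t2).count(),
                static_cast<unsigned long>(chunk.size()),
                static_cast<unsigned long>(packedLen));
}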
As an early attempt at emulating the flow, some of the above tasks were simulated on UNIX by piping a
stream of bytes through the gzip utility. Four different types of databases were used in the experiment.
• DB1 is a database composed entirely of 3 million rectangles spread across 15 layers. The idea here was to see how compression worked across a pattern of regular shapes.
• DB2 is a database with 400K scalar instTerms spread across 4 instances. The intent here was to simulate a single-bit layout database.
• DB3 is a database with 402080 multi-bit and single-bit instTerms. The intent was to simulate a large schematic database with multi-bit connectivity. The database contains a large number of strings and names.
• DB4 is a representative large customer design.
The following table shows the original database sizes and the resulting sizes using three different levels
of compression (fast or gzip -1, best or gzip -9, and the default which is gzip -6). The times shown reflect
how long it took gzip to process the file (reading the uncompressed file and writing the compressed
version).
        Original     gzip --fast                   gzip --best                   gzip (default)
                     size        shrink   time     size        shrink   time     size        shrink   time
DB1     570011652    114285221   79.95%   22.5s    129206064   77.33%   4:10.8   124159096   78.22%   1:13.3
DB2     21994988     5130750     76.67%   2.9s     5171237     76.49%   7.0s     5170959     76.49%   3.6s
DB3     24373324     5478091     77.52%   1.8s     5497817     77.44%   7.3s     5497405     77.44%   3.7s
DB4     351098860    123252510   64.90%   21.2s    118196320   66.34%   1:20.5   118375741   66.28%   47.3s

The following table shows the time it takes to run gzip -d on the compressed version of the file (reading
the compressed file and writing the uncompressed version).
gzip -d          DB1      DB2     DB3     DB4
from --fast      11.6s    1.5s    1.0s    8.7s
from --best      24.3s    1.1s    1.4s    12.05s
from default     11.9s    1.9s    0.8s    14.3s

These tables show that there can be a big difference in run time between the fast and best compression
levels. They also show that, at least for OpenAccess database files, the differences in the compression
results between the levels are not very large.
We can also try to emulate and compare the write and read times for zipped versus raw data. For
example, the following two UNIX commands can be used to emulate and measure the time it takes to
write data to disk.
cat srcfile | /usr/bin/time --output=time.out cat > dstfile
cat srcfile | /usr/bin/time --output=time.out gzip -c -<level> > dstfile.gz

The first command emulates writing raw data to a destination file. The second command
emulates writing data in compressed form to a file. A similar set of commands can be used to emulate
and time the read operation.
cat srcfile | /usr/bin/time --output=time.out cat > /dev/null
cat srcfile.gz | /usr/bin/time --output=time.out gzip -cd > /dev/null

The following tables show the comparison between writing the raw form of a database file and writing the
fast, best, and default compressed forms of the file. These tables reflect these operations being done
where the source and destination files are on the same local disk or local area network (LAN).
LAN write    raw       gzip --fast   gzip --best   gzip (default)
DB1          10.1s     19.1s         4:13.4        1:16.2
DB2          0.4s      0.9s          5.0s          3.7s
DB3          0.5s      1.1s          5.1s          5.5s
DB4          9.5s      15.6s         1:23.8        49.1s

LAN read     raw       from fast     from best     from default
DB1          6.3s      5.4s          7.2s          6.7s
DB2          0.3s      0.2s          0.3s          0.3s
DB3          0.6s      0.3s          0.4s          0.4s
DB4          5.7s      3.7s          5.4s          5.9s

Compressing and writing a file can be 2 to 8 times slower than writing the original file, which is expected
because the different compression levels use different complexity algorithms to perform the compression.
As expected, the read times are much faster than the write times, and the difference between reading the
raw form of a file versus decompressing and reading a compressed form of a file is small.
The above exercise gives us an idea of the impact on write and read times for an application user who
normally would have their data locally available. To emulate the environment of a user doing reads
and writes across a WAN, the above exercise can be modified so that the destination files in the write
exercise are at a location somewhere on the wide-area network, and the source files in the
read exercise come from a location on the WAN. The following two tables show numbers using an extreme
form of a WAN where the network bandwidth is limited by a VPN tunnel across a DSL network access.
WAN write    raw        gzip --fast   gzip --best   gzip (default)
DB1          2:21:44.5  0:28:54.4     0:32:06.5     0:31:04.3
DB2          0:05:55.0  0:01:22.5     0:01:23.9     0:01:25.8
DB3          0:05:17.7  0:01:15.6     0:01:19.0     0:01:21.7
DB4          1:26:27.5  0:30:16.9     0:29:48.8     0:29:49.3

WAN read     raw        from fast     from best     from default
DB1          0:23:17.0  0:04:54.2     0:04:48.7     0:05:07.6
DB2          0:01:22.8  0:00:13.3     0:00:10.5     0:00:11.1
DB3          0:00:46.0  0:00:12.3     0:00:15.0     0:00:12.3
DB4          0:17:36.2  0:07:00.0     0:05:57.2     0:05:19.4

The extreme WAN example shows that the size of the data being moved across the network affects the
write and read times dramatically.
2.4 Support Utility to Compress/Decompress Databases
This functionality satisfies the requirements described in Section 2.4 of the Requirements Specification.
DESCRIPTION
The user must be given the ability to do the following:
• Process the OpenAccess databases in a library, compress them as needed, and update the compression control value of the library.
• Process the OpenAccess databases in a library, uncompress the ones that are in compressed form, and update or reset the compression control value of the library.
• Report the value of the compression control attribute of a library; this would report whether OpenAccess databases in the library would be saved in compressed form or not.
• Scan the OpenAccess databases in a library and report on the ones that don't match the compression control attribute of the library.
• Scan the OpenAccess databases in a library and update any databases that don't match the compression control attribute of the library.
OpenAccess will supply a single, stand-alone executable that supports the above capabilities. The
desired action or actions taken are specified using command line options. Pictures in the functional
design review slides and in this document have suggested the name “oazip” for the name of the utility;
other suggestions are “oacompress” or “oasquishDB”.
The default behavior of the utility would be to compress the OpenAccess databases in a library as
needed and update the compression control library attribute accordingly. The utility will not change any of
the internal database timestamps in any of the database files.
The following command line switches are suggested:
• -lib <name>
  This required option specifies the name of the library to process. It is an error if this option is not specified.

• -compress
  When this option is specified, the utility will process the OpenAccess databases in a library, compress the ones in uncompressed form, and update the compression control value of the library accordingly. The default compression level will be used.

• -compressLevel <level>
  When this option is specified, the utility will process the OpenAccess databases in a library, compress the ones in uncompressed form, and update the compression control value of the library accordingly. The specified compression level will be used and it must be an integer value in the range 1-9.

• -decompress
  When this option is specified, the utility will process the OpenAccess databases in a library, uncompress the ones in compressed form, and reset the compression control value of the library.

• -query
  When this option is specified, the utility will report whether the compression control is specified for the library and, if so, what level it is set to.

• -check
  When this option is specified, the utility will report those OpenAccess databases in the library that are inconsistent with the compression control setting of the library. If there is no compression control specified, the utility will list the databases that are in compressed form. If the compression control is specified, the utility will list those databases that are either in uncompressed form or were written using a compression level different than what the compression control is set to.

• -update
  When this option is specified, the utility will process the OpenAccess databases in a library and update the ones that are inconsistent with the compression control value of the library.

• -h
  This option will print a usage message.
Note that some combinations of options would be ambiguous and will be flagged as an error. For
example, it does not make sense to specify -compress with any other option or to specify -decompress
with any other option. It would not make sense to specify -check and -update together. However, it
would be reasonable to specify -query and -check together or -query and -update together.
When no options are specified, the utility will issue an error message stating that the required
arguments were not specified. A usage message may be printed as well, but the behavior should be
consistent with that of other OpenAccess executables like the language translators.
If only the -lib option is given, the utility by default will compress the databases in the specified library
using the default compression level.
Although the utility is intended to process all the databases in a library, there may be a desire at some
later time to be able to process the databases under a given cell or process the databases in a specified
cellView. To support the specification of a cell name or cellView, additional arguments such as -cell and
-view would be added and used in combination.
Ideally, the utility should acquire an exclusive lock on the library before processing it. However, this is not
supported by oaDM. Therefore, the utility should handle the following error conditions.
• When compressing, decompressing, or synchronizing OpenAccess databases in a library, the utility should issue an appropriate error when a database file is locked by another user or isn't writable.
• When checking the databases in a library, whenever the utility reports on a database that is out of sync with the compression control attribute of the library, the utility should also report if the database is locked by another user or isn't writable.
The utility must be as robust as possible when system errors occur. For example, if the disk becomes full
while a database is being written, the utility must recover from the error and leave the original database file in
place. It may have to abort operations altogether, but it should never corrupt the contents of any
databases it was processing at the time the system error occurred.
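One common way to achieve this (a sketch of the general technique, not the mandated implementation) is to write the new form of each database file to a temporary file in the same directory and rename it over the original only after the write succeeds; the .tmp suffix and the std::string payload are placeholders.

#include <cstdio>
#include <string>

// Write the converted database image to <path>.tmp and rename it into place
// only on success, so a full disk or write error leaves the original intact.
static bool replaceDatabaseFile(const std::string &path,
                                const std::string &newContents)
{
    const std::string tmpPath = path + ".tmp";

    std::FILE *tmp = std::fopen(tmpPath.c_str(), "wb");
    if (!tmp) {
        return false;
    }

    std::size_t written = std::fwrite(newContents.data(), 1, newContents.size(), tmp);
    bool ok = (written == newContents.size());
    if (std::fclose(tmp) != 0) {
        ok = false;                              // flush failed (e.g., disk full)
    }

    if (!ok) {
        std::remove(tmpPath.c_str());            // abandon the partial file
        return false;                            // original database is untouched
    }
    return std::rename(tmpPath.c_str(), path.c_str()) == 0;
}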
DEPENDENCIES
The utility depends on the capability to read and write OpenAccess databases in either a compressed or
uncompressed form. Because the utility is to be shipped in Cadence application releases that use older
versions of OpenAccess, the utility must be built using static versions of the 22.42 OpenAccess libraries.
PUBLIC INTERFACES
The utility will utilize the common infrastructure used to build other OpenAccess executables so that the
command line argument specification will be similar and consistent in that regard.
2.5 Data Compatibility
This functionality addresses the requirements described in Section 2.6 of the Requirements Specification.
DESCRIPTION
OpenAccess will fail to read a database if there is a compatibility issue with it due to the existence of a
feature that is not recognized by a version of OpenAccess. OpenAccess will also fail to read any file that
doesn't appear to be an OpenAccess database file. For example, if the user replaces a layout.oa file with
a zero-sized file with the same name, OpenAccess will throw this error:

ERROR: (OA-1006): Open of testlib/cell/layout failed - primary file for design is invalid.

If the user replaces the layout.oa file with any kind of non-OpenAccess file of the same name (e.g., a
regular text file), OpenAccess will throw this error:

ERROR: (OA-131): Corrupted database encountered: design testlib/cell/layout

OpenAccess will allow a caller to create an instance of a master that contains an empty or invalid
database. The instance will remain unbound. When a design is opened that has an instance whose
master is an empty or invalid database, no error is produced by OpenAccess. In fact, oaDesign::openHier
will not generate any kind of error; the instance will simply remain unbound. The binding of any
oaRefHeader (instance or via) will implicitly catch any error from opening the master design and simply
add the header to the unbound list of headers. The behavior will be the same if an older version of
OpenAccess is used to open a design that contains instances to masters whose databases are in
compressed form. There are no plans to change this behavior at this time because to propagate an error
to the caller in the middle of the binding sequence would leave data in an intermediate and unusable
state.
When an older version of OpenAccess reads a database that contains compressed data, it may read part
of the database and then abort when it encounters the compressed internal data because the data may
appear to be corrupted. This behavior is not desirable because the user could lose valuable work before
learning that the database was not truly readable by the version of OpenAccess in use.
There is some appeal to try to define a compressed database as a new feature of data model 4, but the
feature compatibility checking is currently based around features between data models and not features
of a given data model. So trying to use the oaFeature/oaCompatibilityError mechanisms could only work if
a rework of the compatibility checking was done.
Another approach would be to take advantage of the fact that each OpenAccess database type supports
a separate on-disk schema revision number. Currently, this is a static number per database that can span
data model revisions (i.e., a design that is data model 0 could have the same on-disk schema rev as a
design that is data model 4). If the mechanism was enhanced to be a run-time value instead of a static
value, compressed design databases could be marked as having a different on-disk schema revision.
OpenAccess would be enhanced so that newer versions of OpenAccess would be able to interpret both
compressed and uncompressed on-disk schema revision values. Older versions of OpenAccess will
refuse to open a database whose on-disk schema revision doesn’t match what the software supports.
Hence, no changes are required in the older branches of OpenAccess code. The error message
produced in this case will look like the following:
ERROR: (OA-14): testlib/cell/layout database revision (100) is not compatible with current revision (99)

New documentation will have to be added to the OpenAccess programmer's guide describing what the
difference is between design database revisions 99 and 100.
In addition, OpenAccess version 22.41 should be enhanced to detect a compressed database and issue
an error that is much more specific and descriptive in nature (descriptive enough to suggest running the
compression utility to decompress the data). Introducing a new design database revision as suggested
above satisfies the requirement that older versions of OpenAccess will fail with a data compatibility
message and will not crash.
TRANSLATORS
The OpenAccess translators may open databases explicitly to update them or implicitly when referencing
them. When the translators from the 22.41 release try to update a design database that was saved in
compressed form, the translator will issue the OA-14 incompatible database error. This will change when
22.41 is eventually updated to throw a more specific error message about trying to access compressed
data. As mentioned earlier, if a translator is updating or creating data that references a design that
happens to be saved in compressed form, there is no error message produced per se; the instances (or
custom vias) will be unbound.
2.6 Translator Support of Compressed Databases
This functionality addresses the requirements described in Section 2.8 of the Requirements Specification. In
general, all utilities that can create a library should support an option allowing the user to specify that the
OpenAccess databases written to the library are written in compressed form.
DESCRIPTION
The OpenAccess translators that create a library will be updated with an option that specifies that the new
library will contain OpenAccess databases saved in compressed form. This option should be
encapsulated in the oaUtil::TranslatorOptions class to make the specification of the command line option
consistent across the applications that use it. Work is likely required across all of the OpenAccess
translators except for the ones mentioned below. The translators include the following (their
corresponding translator component is listed in parentheses if applicable):
• def2oa (oaLefDef::DefIn)
• strm2oa (oaStrm::StrmIn)
• verilog2oa (oaVerilog::VerilogIn)
The translators that will not be updated to support the new command line option are:
• oa20to22
• spef2oa
The proposed command line options are as follows:
• -compress: This option specifies that the new library created during translation will have OpenAccess designs saved in compressed form. The default compression level will be used.
• -compressLevel <level>: This option specifies that the new library created during translation will have OpenAccess designs saved in compressed form using the specified compression level. The level value must be in the range 1-9.
USAGE
When one of the designated translators creates a new library as part of the translation process, the user
must have the ability to specify that the designs written to that library will be saved in compressed form.
For those translators that support updating the contents of an existing library, specifying the -compress
or -compressLevel options together with update mode will result in an error.
When designs are written to the library in update mode, OpenAccess will obey the current setting of the
library attribute if specified.
2.7 Packaging
This section addresses the requirements described in Section 2.7 of the Requirements Specification.
DESCRIPTION
As mentioned earlier in this document, it is a requirement that the zlib used by OpenAccess not conflict
with any version of zlib that may be employed and linked into an application using OpenAccess. To
mitigate this, a snapshot of the zlib source code would be taken, scoped to a C++ namespace, and
built along with the OpenAccess reference implementation.
To avoid requiring applications to modify Makefile load library specifications, the OpenAccess zlib variant
should be compiled into an existing library archive like the one produced for the oaBase package. This
would mean that the OpenAccess zlib variant would be private OpenAccess functionality for use within
the OpenAccess core. An encapsulating class could be defined to encapsulate the zlib functionality
required for reading and writing the compressed data stream.
As also mentioned earlier in this document, the compression utility must be linked with static OpenAccess
22.42 libraries since the utility could be used with older applications that use OpenAccess 22.04 or 22.41.
DEPENDENCIES
The Makefiles in the OpenAccess core code packages must be updated to create the static versions of
the library archives. The additional work of linking these static libraries will lengthen the build times for
OpenAccess. One idea to mitigate the effect on build times is to build the static libraries and the
compression utility in a later part of the build process. The compression utility would have its own set of
tests, so its build and test run could be done in parallel with later parts of the OpenAccess build.
The static libraries will not be included in the run-time OpenAccess kit.
3. Technical Risks and Dependencies
3.1 Technical Risks
The compression and decompression of data will be done at a fairly low level of OpenAccess. Thorough
testing using databases with different mixes of data must be done to ensure the robustness of the functionality.
The capability to write and read compressed data should not introduce any inconsistencies in the data nor
prevent OpenAccess from reading the data from disk.
3.2 Technical Dependencies
As mentioned earlier in the document, OpenAccess will have a dependency on zlib functionality. To avoid
the use of conflicting zlib implementations within an application, OpenAccess would clone the zlib source
(available for use as long as the cloned source acknowledges the source and notes it is not the original) and
scope it in an OpenAccess C++ namespace. The OpenAccess zlib will be contributed along with the
OpenAccess reference implementation code to Si2.