Sybase Cross-Platform dump/load Guide

Cross-platform dump/load is supported in ASE 12.5.3 and later.



1. Run dbcc checkdb, or any other dbcc command, to verify that the database is clean
$ isql -Usa -P -SASE125
1> dbcc checkdb("tec")
2> go
Checking tec: Logical pagesize is 2048 bytes
Checking sysobjects: Logical pagesize is 2048 bytes
The total number of data pages in this table is 32.
Table has 336 data rows.
...
Checking tgh_jx: Logical pagesize is 2048 bytes
The total number of data pages in this table is 1.
Table has 25 data rows.
Checking gy_con_area: Logical pagesize is 2048 bytes
The total number of data pages in this table is 1.
Table has 17 data rows.
Checking gy_user_work_card_s1: Logical pagesize is 2048 bytes
The total number of data pages in this table is 3.
Table has 106 data rows.
DBCC execution completed. If DBCC printed error messages, contact a user with
System Administrator (SA) role.
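
The step above allows any other dbcc command as well; as a hedged example, allocation and system catalog checks could be run in the same isql session (output omitted here, and the exact messages vary by server):

1> dbcc checkalloc("tec")
2> go
1> dbcc checkcatalog("tec")
2> go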

2. Use sp_dboption to put the database into single-user mode
1> use master
2> go
1> sp_dboption tec,"single user",true
2> go
Database option 'single user' turned ON for database 'tec'.
Running CHECKPOINT on database 'tec' for option 'single user' to take effect.
(return status = 0)
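
If the option cannot be turned on because other sessions are still using the database, sp_who can help identify them before retrying; this is just a sketch, and the dbname column in its output shows which database each process is using:

1> sp_who
2> go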

3. Use sp_flushstats to flush in-memory statistics to the systabstats system table. Wait at least ten seconds for the process to complete.
1> use tec
2> go
1> sp_flushstats
2> go
DBCC execution completed. If DBCC printed error messages, contact a user with
System Administrator (SA) role.
DBCC execution completed. If DBCC printed error messages, contact a user with
System Administrator (SA) role.
(return status = 0)

4. Run the checkpoint command to write all dirty pages (pages updated since the last write) to the database devices. Wait at least ten seconds for the process to complete.
1> checkpoint
2> go

5. Dump the database.
1> use master
2> go
1> dump database tec to "/data_backup/tec0912zm"
2> go
Backup Server session id is:  4.  Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.41.1.1: Creating new disk file /data_backup/tec0912zm.
Backup Server: 6.28.1.1: Dumpfile name 'tec0825608D59    ' section number 1
mounted on disk file '/data_backup/tec0912zm'
Backup Server: 4.58.1.1: Database tec: 14644 kilobytes DUMPED.
Backup Server: 4.58.1.1: Database tec: 36278 kilobytes DUMPED.
Backup Server: 4.58.1.1: Database tec: 57912 kilobytes DUMPED.
...
Backup Server: 4.58.1.1: Database tec: 585206 kilobytes DUMPED.
Backup Server: 4.58.1.1: Database tec: 586744 kilobytes DUMPED.
Backup Server: 3.43.1.1: Dump phase number 1 completed.
Backup Server: 3.43.1.1: Dump phase number 2 completed.
Backup Server: 3.43.1.1: Dump phase number 3 completed.
Backup Server: 4.58.1.1: Database tec: 586752 kilobytes DUMPED.
Backup Server: 3.42.1.1: DUMP is complete (database tec).
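
Before copying the file off the server, the dump header can be inspected to confirm the file is readable; the with headeronly option only reads the header and loads nothing. This is a sketch, and the same check can be repeated on the target server after the copy to verify that the transfer did not corrupt the file:

1> load database tec from "/data_backup/tec0912zm" with headeronly
2> go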

6. Use sp_dboption to turn single-user mode back off
1> sp_dboption tec,"single user",false
2> go
Database option 'single user' turned OFF for database 'tec'.
Running CHECKPOINT on database 'tec' for option 'single user' to take effect.
(return status = 0)

(Note: database options can only be changed from the master database; running sp_dboption elsewhere returns "You must be in the 'master' database in order to change database options.")

7. Copy the dump file to the Windows ASE server
1. The file can be copied with ftp.
2. An FTP client tool can also be used.
In either case the dump file must be transferred in binary mode; an ASCII-mode transfer will corrupt it.


8. Create the database devices on the Windows ASE server
disk init name = 'webquery_data',
physname = 'c:/sybase/data/webquery_data.dat',
size = '300M'
go

disk init name = 'webquery_log',
physname = 'c:/sybase/data/webquery_log.dat',
size = '90M'
go
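
A quick sanity check, assuming the devices above were created successfully, is to list them with sp_helpdevice (a sketch; output omitted):

sp_helpdevice webquery_data
go
sp_helpdevice webquery_log
go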

9. Create the database
CREATE DATABASE cpy_webquery
            ON webquery_data = '300M'
        LOG ON webquery_log = '90M'
go
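
Assuming the database was created as above, sp_helpdb can be used to confirm its name and the data/log allocations before loading (a sketch):

sp_helpdb cpy_webquery
go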

10. Load the database
load database cpy_webquery from "d:/data_backup/webquery090317.dat"
go

11. Bring the database online
online database cpy_webquery
go

12. Rebuild the indexes
sp_post_xpload  
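
A minimal sketch of the full invocation, assuming sp_post_xpload is run from inside the newly loaded database (it checks the indexes and rebuilds any that are invalid after a cross-platform load, and can take a long time on large databases):

use cpy_webquery
go
sp_post_xpload
go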


The following is the original text of the official documentation:
Dump and Load a Database Across Platforms
Adaptive Server Enterprise supports both the big endian and little endian platforms.
Overview
Adaptive Server Enterprise, version 12.5.2, allowed the dump and load of databases across platforms with the same endianness architecture.
With Adaptive Server Enterprise, version 12.5.3, the dump and load of databases across platforms can now be done with different endianness architectures. This means that a dump database and load database can be done from a big endian platform to a little endian platform and from a little endian platform to a big endian platform.
A big endian platform is where the most significant byte is with the lowest address. The little endian platform is where, within a given 16 or 32 bit word, bytes at the lower addresses have a lower significance.
There is no syntax change with dump or load database in version 12.5.3. Adaptive Server automatically detects the endian of the database dump file at the time of a load database, then performs the necessary conversions. Loads in an older version, such as 11.9 and 12.0, are also supported. The dump and load can be from 32 bit to 64 bit platforms, and vice versa.
Endian platforms
Platforms supported:

Big endian:    Solaris 32/64, IBM 32/64, SGI 32/64, HPPA 64, HPIA 64, MAC 32

Little endian: Linux IA 32, Linux IA 64, NT, Sun X86
Dump and load across platforms with the same endian architecture
When dump database and load database are done across platforms with the same endian architecture, user and system data do not require conversions. There are no limitations on operations with the dump and load of a database. Adaptive Server Enterprise supports dump and load processes for transactions and databases across platforms.
Dump and load across platforms with different endian architecture
Dumping a database
Before you run dump database, use the following procedures to move the database to a transactional quiescent status:
1. Verify the database runs cleanly by executing dbcc checkdb or any other dbcc command.
2. To prevent concurrent updates from open transactions by other processes during the dump database, place the database in single-user mode with sp_dboption.
3. Flush statistics to systabstats with sp_flushstats. You must wait for at least ten seconds for the process to complete.
4. Run checkpoint against the database to flush updated pages. You must wait for at least ten seconds for the process to complete.
5. Run dump database.
Loading a database
Once you load the database, Adaptive Server automatically identifies the endian type on the dump file and performs all necessary conversions during the load database and online database.
Note: When Adaptive Server converts the order for the index rows, some may be incorrect, so you must re-create the indexes after loading the database. See "sp_post_xpload" for rebuilding indexes.
Restrictions
•Remote dump transaction and load transaction to or from a Backup Server are not supported.
•A password-protected dump file cannot be loaded across platforms.
•dump transaction and load transaction are not allowed across platforms.
•If you dump database and load database for a parsed XML object, you must parse the text again after the load database is completed.
•You cannot perform the dump database and load database across platforms on Adaptive Servers earlier than version 11.9.
•Embedded data structures stored as binary, varbinary, or image columns are not converted, because Adaptive Server cannot translate these structures.
•When you dump and load a master database, you must re-create all logins in syslogins because passwords are incompatible between platforms.
•Reset the sa password using the command line argument -psa after the load database of the master database is completed.
