Thursday, August 28, 2008
Re-creating Oracle Enterprise Manager's Console Repository
An earlier blog post, Oracle 10g: Setting up The Oracle Enterprise Manager Console, has detailed instructions for setting up the Oracle 10g OEM console. However, if the database along with the RDBMS server is moved (copied over) to a different host, Oracle Enterprise Manager (OEM) refuses to start. In such cases, the error message will be similar to the following.
% emctl start dbconsole
OC4J Configuration issue. /export/pspp/oracle/oc4j/j2ee/OC4J_DBConsole_localhost_test not found.
One simple solution (there might be many) to fix this issue is to re-create the repository for the OEM console. The steps are as follows.
- Make sure the database instance and the listener are up.
- Unlock the sysman and sys user accounts, if they are not unlocked already.
e.g.,
% sqlplus / as sysdba
SQL*Plus: Release 10.2.0.3.0 - Production on Thu Aug 27 23:42:24 2008
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
SQL> alter user sysman identified by manager account unlock;
User altered.
SQL> alter user sys identified by manager account unlock;
User altered.
SQL> quit
- Re-create the repository.
e.g.,
% emca -config dbcontrol db -repos recreate
STARTED EMCA at Aug 27, 2008 11:43:09 PM
EM Configuration Assistant, Version 10.2.0.1.0 Production
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Enter the following information:
Database SID: test
Listener port number: 1521
Password for SYS user: manager
Password for DBSNMP user: dbsnmp
Password for SYSMAN user: manager
Email address for notifications (optional):
Outgoing Mail (SMTP) server for notifications (optional):
-----------------------------------------------------------------
You have specified the following settings
Database ORACLE_HOME ................ /export/pspp/oracle
Database hostname ................ matrix
Listener port number ................ 1521
Database SID ................ test
Email address for notifications ...............
Outgoing Mail (SMTP) server for notifications ...............
-----------------------------------------------------------------
Do you wish to continue? [yes(Y)/no(N)]: Y
Aug 27, 2008 11:43:28 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /export/pspp/oracle/cfgtoollogs/emca/test/emca_2008-08-27_11-43-09-PM.log.
Aug 27, 2008 11:43:31 PM oracle.sysman.emcp.EMReposConfig dropRepository
INFO: Dropping the EM repository (this may take a while) ...
...
...
You are done. As simple as that.
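To double-check, the console can now be started and its status verified with the same emctl utility mentioned at the top of this post; e.g.,
% emctl start dbconsole
% emctl status dbconsole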
_______________
Technorati Tags:
Oracle |
Database |
Oracle Enterprise Manager |
OEM
Monday, August 18, 2008
Fix to Firefox 3 Crash on Solaris 10 x86
Symptom:
Firefox 3 crashes on Solaris 10 x86 when the web browser tries to render some HTML pages with SWF content in them. For example, I was not able to open the home page of CNN with the Adobe Flash Player 9.0.124.0 plug-in installed on top of Firefox 3.
According to Adobe's TechNote kb404004, Adobe Flash Player 9.0.124.0 may cause Firefox 3 to crash on Solaris operating systems; the crash is encountered when an HTML page with SWF content renders in a non-current tab, or when a tab containing SWF content is closed.
Solution:
The same TechNote kb404004 suggests installing Flash Player version 9.0.125.0 to resolve the issue on the Solaris operating system (although I have not tested it on SPARC, it appears to be an issue on both the SPARC and x86/x64 versions).
Download URL for Flash Player version 9.0.125.0:
http://www.adobe.com/shockwave/download/alternates/
After installing Flash Player version 9.0.125.0, Firefox 3 did not crash when rendering the home page of CNN; and the plug-in seems stable.
[Update 08/28/2008] Even though the web browser is somewhat stable compared to the vanilla web browser on Solaris 10 x86/x64 Nevada build 95, it is definitely not rock solid, as it kept crashing intermittently. The crashes are so random that I don't know when it is going to crash. So I bid adieu to Firefox 3 and went back to Firefox 2.0.0.16, which is much more stable; and now there is one less thing to worry about. [/Update 08/28/2008]
_______________
Technorati Tags:
Solaris |
OpenSolaris |
Adobe |
Flash Player
Thursday, August 07, 2008
Setting Up a Simple MySQL Cluster
The goal of this blog post is to show the steps involved in configuring a very simple MySQL Cluster with all the nodes running on one machine. Keep in mind that this is not a realistic scenario; the purpose is simply to introduce the various components involved in a MySQL Cluster and to show how to configure them. MySQL Cluster beginners and novice users are the target audience of this blog post.
Brief overview of MySQL Cluster
MySQL Cluster is a high-availability, high-redundancy version of MySQL adapted for the distributed computing environment. MySQL Cluster integrates the standard MySQL server with an in-memory clustered storage engine called NDB. MySQL Cluster uses a 'shared-nothing' architecture that allows the system to work with inexpensive hardware, with a minimum of specific requirements for hardware or software, and provides 99.999% data availability with no single point of failure. Even though MySQL Cluster is often used as an in-memory database, it can be configured to use disk-based data as well.
Steps for configuring the MySQL Cluster
- Download the appropriate package/tar file/source code for your platform from the URL:
http://dev.mysql.com/downloads/cluster/index.html
In this blog post, I'm assuming that the MySQL Cluster will be set up on a single system running 32-bit Solaris 10 x86. So download the Solaris (TAR packages) for Solaris 10 (x86, 32-bit).
- Create the mysql group and the mysql user.
# groupadd mysql
# useradd -g mysql mysql
- As the root user, extract mysql-cluster-gpl-6.2.15-solaris10-i386.tar.gz under the /usr/local directory (a sample extraction command is shown after the symlink step below).
- Create a symbolic link, mysql, to the newly extracted directory so it will be easier to work with.
% ln -s mysql-cluster-gpl-6.2.15-solaris10-i386 mysql
Let's treat /usr/local/mysql as MySQL_HOME from here onwards.
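For reference, the extraction itself might look like the following; the path to the downloaded tarball below is just a placeholder.
# cd /usr/local
# gunzip -c /path/to/mysql-cluster-gpl-6.2.15-solaris10-i386.tar.gz | tar xf -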
- MySQL initially has no databases installed. Running <MySQL_HOME>/scripts/mysql_install_db installs the mysql and test databases. In addition, the mysql_install_db program sets up the initial MySQL grant tables containing the privileges that determine how users are allowed to connect to the server. mysql_install_db needs to be run only the first time you install the MySQL Cluster.
% cd <MySQL_HOME>
% scripts/mysql_install_db
Installing MySQL system tables...
OK
Filling help tables...
OK
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
./bin/mysqladmin -u root password 'new-password'
./bin/mysqladmin -u root -h unknown password 'new-password'
Alternatively you can run:
./bin/mysql_secure_installation
..
..
- Change the ownership of the MySQL directories to ensure a level of security when running MySQL.
e.g., as the root user:
# chown -R root:mysql <MySQL_HOME>
# chown -R mysql:mysql <MySQL_HOME>/data
- Add the MySQL bin directory to PATH and lib directory to the LD_LIBRARY_PATH.
% cat .profile
PATH=/usr/bin:/usr/sbin:/usr/local/mysql/bin:/usr/ccs/bin:/usr/ucb:/usr/sfw/bin:/usr/local/bin:.:$PATH
export PATH
LD_LIBRARY_PATH=/usr/local/mysql/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
- For ease of use, copy the mysql.server script from the <MySQL_HOME>/support-files directory to the <MySQL_HOME>/bin directory.
% cp <MySQL_HOME>/support-files/mysql.server <MySQL_HOME>/bin
- Start the MySQL server.
e.g., as the root user, run:
# cd /usr/local/mysql
# ./mysql.server start
Starting MySQL
. SUCCESS!
- By default, there is no password for the MySQL 'root' user (not the OS 'root' user). Set up the password for the 'root' user as shown below.
# cd /usr/local/mysql
# ./mysqladmin -u root password admin
- Ensure that the MySQL server is running as expected by connecting with the mysql client.
# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.23-ndb-6.2.15-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| test |
+--------------------+
3 rows in set (0.14 sec)
mysql> quit
Bye
- Install/create the world database.
Download location: http://dev.mysql.com/doc/
# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.23-ndb-6.2.15-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> CREATE DATABASE world;
Query OK, 1 row affected (0.00 sec)
Select the world database as the default database.
mysql> USE world;
Database changed
Create the tables and load the data by executing world.sql.
mysql> SOURCE world.sql
...
Query OK, 1 row affected (0.00 sec)
Query OK, 1 row affected (0.00 sec)
Query OK, 1 row affected (0.00 sec)
...
Check the tables in the world database.
mysql> SHOW TABLES;
+-----------------+
| Tables_in_world |
+-----------------+
| City |
| Country |
| CountryLanguage |
+-----------------+
3 rows in set (0.00 sec)
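As an optional sanity check on the load, count the rows in the City table; the count should match the 4079 rows returned by the full SELECT later in this post.
mysql> SELECT COUNT(*) FROM City;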
- Create the data directory for the cluster.
e.g.,
# mkdir /usr/local/mysql/mysql-cluster
- Configure the Management/MGM Node
Create /etc/config.ini with the following contents.
# Options affecting NDBD processes on all Data Nodes
[NDBD DEFAULT]
NoOfReplicas=2 # Number of replicas
DataDir=/usr/local/mysql/mysql-cluster # Directory for the data files on all the data nodes
DataMemory=15M # How much memory to allocate for data storage
IndexMemory=15M # How much memory to allocate for index storage
# Since the "world" database takes up only about 500KB,
# the configured values for DataMemory and IndexMemory
# should be more than enough for this example Cluster setup
# TCP/IP options:
[TCP DEFAULT]
portnumber=2202 # This is the default; however, you can use any
# port that is free for all the hosts in the cluster
# MANAGEMENT (MGM) NODE
[NDB_MGMD]
HostName=localhost # Hostname or IP address of MGM node
DataDir=/usr/local/mysql/mysql-cluster # Directory for MGM node log files
# DATA NODES
[NDBD]
HostName=localhost # Hostname or IP address of the Data Node
[NDBD]
HostName=localhost # Hostname or IP address of the Data Node
# SQL NODE
[MYSQLD]
HostName=localhost # Hostname or IP address of the SQL or API Node
This configuration assumes that there will be one management node, two data nodes, and one SQL node.
The config.ini file is read only by the MySQL Cluster management server, which then distributes the information contained therein to all processes participating in the cluster. config.ini contains a description of each node involved in the cluster -- this includes configuration parameters for data nodes and configuration parameters for connections between all nodes in the cluster.
The config.ini file can be placed anywhere on the file system, i.e., it need not be located in the /etc directory.
- Configure the Data/Storage and SQL/API Nodes
The my.cnf file is needed for the data and SQL nodes, and it should be located in the /etc directory.
Create the /etc/my.cnf configuration file with the following contents:
# Options for mysqld process
[MYSQLD]
ndbcluster # run NDB storage engine i.e., set up the local MySQL
# server to utilize MySQL Cluster (ndbcluster)
ndb-connectstring=localhost # location of management server
[MYSQL_CLUSTER]
ndb-connectstring=localhost # location of management server
# Options for ndbd process
[NDBD]
ndb-connectstring=localhost # location of management server
Note: once the mysqld process is started with the ndbcluster and ndb-connectstring parameters in the [MYSQLD] section of the my.cnf file, it is not possible to execute any CREATE TABLE or ALTER TABLE statements without starting the cluster; those statements will fail with an error.
- Start the MySQL Cluster
Each cluster node process must be started separately, and on the host where it resides. The management node should be started first, followed by the data nodes, then finally by any SQL nodes.
- Start the management node
On the management host, run the following commands to start the management node process.
# cd /usr/local/mysql/bin
# ./ndb_mgmd -f /etc/config.ini
Note that ndb_mgmd must be told where to find the configuration file, using the -f or --config-file option.
Check the status of the cluster by connecting to the management console, ndb_mgm.
# cd <MySQL_HOME>/bin
# ./ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from localhost)
id=3 (not connected, accepting connect from localhost)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @localhost (mysql-5.1.23 ndb-6.2.15)
[mysqld(API)] 1 node(s)
id=4 (not connected, accepting connect from localhost)
ndb_mgm> quit
- On each of the data node hosts, run the following command to start the ndbd process for the first time.
# ndbd --initial
Keep in mind that it is necessary to use the --initial parameter only when starting ndbd for the first time, or when restarting after a backup/restore operation or a configuration change. This is because the --initial option causes the node to delete any files created by earlier ndbd instances that are needed for recovery, including the recovery log files. An exception is that --initial does not delete disk data files; if you do need to perform an initial restart of the cluster, you must delete any existing disk data log files and data files manually.
Start the second data node using the same command.
Check the status of the cluster again by connecting to the management console.
# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0, Master)
id=3 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15)
[mysqld(API)] 1 node(s)
id=4 (not connected, accepting connect from localhost)
- Finally (re)start the MySQL Server.
# ./mysql.server stop
Shutting down MySQL
... SUCCESS!
# ./mysql.server start
Starting MySQL
.. SUCCESS!
Check the status of the cluster one more time by connecting to the management console.
# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0, Master)
id=3 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15)
[mysqld(API)] 1 node(s)
id=4 @127.0.0.1 (mysql-5.1.23 ndb-6.2.15)
At this point, the whole cluster is up and running.
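As a side note, the same status check can be run non-interactively, which is handy for scripting; this assumes the -e (--execute) option of the ndb_mgm client.
# ndb_mgm -e SHOW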
- Testing the cluster functionality
Alter the City table in the world database to use the NDB (cluster) storage engine, shut down the cluster, and finally try querying the City table.
- Connect to the MySQL Server using the mysql client.
# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.23-ndb-6.2.15-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
- Query the City table.
mysql> USE world;
Database changed
mysql> SELECT * FROM City;
+------+-----------------------------------+-------------+----------------------+------------+
| ID | Name | CountryCode | District | Population |
+------+-----------------------------------+-------------+----------------------+------------+
| 1 | Kabul | AFG | Kabol | 1780000 |
| 2 | Qandahar | AFG | Qandahar | 237500 |
| 3 | Herat | AFG | Herat | 186800 |
...
...
| 4077 | Jabaliya | PSE | North Gaza | 113901 |
| 4078 | Nablus | PSE | Nablus | 100231 |
| 4079 | Rafah | PSE | Rafah | 92020 |
+------+-----------------------------------+-------------+----------------------+------------+
4079 rows in set (0.05 sec)
- Check the storage engine in use
mysql> SHOW TABLE STATUS LIKE 'City'\G
*************************** 1. row ***************************
Name: City
Engine: MyISAM
Version: 10
Row_format: Fixed
Rows: 4079
Avg_row_length: 67
Data_length: 273293
Max_data_length: 18858823439613951
Index_length: 43008
Data_free: 0
Auto_increment: 4080
Create_time: 2008-08-02 11:52:22
Update_time: 2008-08-02 11:52:27
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:
1 row in set (0.04 sec)
- Change the storage engine to NDB (cluster engine).
mysql> ALTER TABLE City ENGINE=NDB;
Query OK, 4079 rows affected (2.41 sec)
Records: 4079 Duplicates: 0 Warnings: 0
- Check the storage engine in use again
mysql> SHOW TABLE STATUS LIKE 'City'\G
*************************** 1. row ***************************
Name: City
Engine: ndbcluster
Version: 10
Row_format: Fixed
Rows: 4079
Avg_row_length: 92
Data_length: 393216
Max_data_length: 0
Index_length: 0
Data_free: 0
Auto_increment: 4080
Create_time: NULL
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:
1 row in set (0.05 sec)
- View the records in the City table one more time, using the same SELECT statement as before.
mysql> SELECT * FROM City;
+------+-----------------------------------+-------------+----------------------+------------+
| ID | Name | CountryCode | District | Population |
+------+-----------------------------------+-------------+----------------------+------------+
| 2803 | Ugep | NGA | Cross River | 102600 |
| 965 | Tegal | IDN | Central Java | 289744 |
| 336 | Marab? | BRA | Par? | 167795 |
| 1766 | Konan | JPN | Aichi | 95521 |
...
...
| 3158 | Ratingen | DEU | Nordrhein-Westfalen | 90951 |
| 1057 | Gwalior | IND | Madhya Pradesh | 690765 |
| 2958 | Opole | POL | Opolskie | 129553 |
| 3213 | Maribor | SVN | Podravska | 115532 |
+------+-----------------------------------+-------------+----------------------+------------+
4079 rows in set (0.22 sec)
The query still returned 4079 rows, just like the earlier run. However, notice that the rows are no longer returned in sorted order. This is mainly because the data is now spread across two data nodes, and the MySQL Server retrieves and shows the rows in the same order it received them from the data nodes.
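If sorted output is desired from the NDB-backed table, just ask for it explicitly; e.g.,
mysql> SELECT * FROM City ORDER BY ID;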
- Stop the cluster
# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHUTDOWN
Connected to Management Server at: localhost:1186
Node 2: Cluster shutdown initiated
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
Disconnecting to allow management server to shutdown.
Node 3: Node shutdown completed.
ndb_mgm>
- Finally, re-run the same SELECT query against the City table.
mysql> SELECT * FROM City;
ERROR 1296 (HY000): Got error 157 'Unknown error code' from NDBCLUSTER
mysql> SHOW ERRORS;
+-------+------+----------------------------------------------------+
| Level | Code | Message |
+-------+------+----------------------------------------------------+
| Error | 1296 | Got error 4009 'Cluster Failure' from NDB |
| Error | 1296 | Got error 157 'Unknown error code' from NDBCLUSTER |
| Error | 1033 | Incorrect information in file: './world/City.frm' |
+-------+------+----------------------------------------------------+
3 rows in set (0.05 sec)
mysql> SELECT * FROM CountryLanguage;
+-------------+---------------------------+------------+------------+
| CountryCode | Language | IsOfficial | Percentage |
+-------------+---------------------------+------------+------------+
| AFG | Pashto | T | 52.4 |
| NLD | Dutch | T | 95.6 |
| ANT | Papiamento | T | 86.2 |
| ALB | Albaniana | T | 97.9 |
...
...
| CHN | Dong | F | 0.2 |
| RUS | Belorussian | F | 0.3 |
| USA | Portuguese | F | 0.2 |
+-------------+---------------------------+------------+------------+
984 rows in set (0.01 sec)
mysql> SHOW TABLE STATUS LIKE 'CountryLanguage'\G
*************************** 1. row ***************************
Name: CountryLanguage
Engine: MyISAM
Version: 10
Row_format: Fixed
Rows: 984
Avg_row_length: 39
Data_length: 38376
Max_data_length: 10977524091715583
Index_length: 23552
Data_free: 0
Auto_increment: NULL
Create_time: 2008-08-02 11:52:27
Update_time: 2008-08-02 11:52:28
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:
1 row in set, 1 warning (0.00 sec)
Because the cluster is not running and the City data is now stored in the cluster, the query failed to return any data. ERROR 1296 by itself is rather obscure -- to see a more sensible error message, run SHOW ERRORS. Since the rest of the tables are still stored locally using the MyISAM storage engine, the MySQL Server is able to retrieve and show the data for the remaining tables in the world database.
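To see at a glance which tables in the world database are cluster-backed and which are still local MyISAM tables, the information_schema catalog can be queried (best run while the cluster is up); a sketch using the table names from this post:
mysql> SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'world';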
Acknowledgments:
Sarah Sproehnle, Sun | MySQL AB
___________________
Technorati Tags:
Sun Microsystems |
MySQL |
Cluster |
Database |
High Availability
Sunday, August 03, 2008
Solaris 9, 10 : Broken X11 Forwarding on IPv4-only Systems
Symptom:
On all Solaris 9 & 10 IPv4-only systems with the following ssh patches (or later), ssh may fail to do X forwarding.
126133-03 SunOS 5.10: sshd patch
126134-03 SunOS 5.10_x86: sshd patch
114356-14 SunOS 5.9: /usr/bin/ssh patch
114357-13 SunOS 5.9_x86: usr/bin/ssh patch
In other words, users may not be able to bring up graphical applications when connected remotely with the -X option of ssh. All such failed attempts will be recorded in the system log (/var/adm/messages) with an error message similar to the following:
sshd[882]: [ID 800047 auth.error] error: Failed to allocate internet-domain X11 display socket
Fix:
There is no official fix available at this time. A bug, 6704823 "Fix for 6684003 prevents ssh from X forwarding on IPv4-only system", was filed against Solaris/ssh, and the fix is in progress at the moment.
Workaround:
Suggested workarounds from the bug report (you need just one of them, not both):
- Add lo0 for IPv6.
e.g.,
# ifconfig lo0 inet6 plumb up
To make the change permanent, do:
# touch /etc/hostname6.lo0
- Disable IPv6 ssh support.
Change ListenAddress to 0.0.0.0 in /etc/ssh/sshd_config; then restart sshd with the -4 option.
e.g.,
# svcadm -v disable ssh
# /usr/lib/ssh/sshd -4
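After applying either workaround, X forwarding can be verified with a quick test from a remote client; user, server, and xclock below are only placeholders/examples.
% ssh -X user@server
% xclock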
_________________________
Technorati Tags:
Solaris |
OpenSolaris