Step 1: sudo apt-get install gcin

Step 2: im-switch -s gcin

Step 3: re-login

Step 4: wget http://edt1023.sayya.org/gcin/noseeing-12.tar.gz

Step 5: tar zxvf noseeing-12.tar.gz

Step 6: mv noseeing.gtab ~/.gcin (a quick check is shown after these steps)

Step 7: restart the computer

Step 8: Afterwards, right-click the gcin icon, choose 【Settings】, and select 【Default input method On/Off】.

 


How to fix Ubuntu 13.10 failing to connect to A2DP Bluetooth devices
bluez version: 4.101-0ubuntu8b1
pulseaudio version: 4.0

In /etc/bluetooth/audio.conf:
// add Enable=Source under the [General] section at the top of audio.conf
[General]
Enable=Source

// change the value of HFP from true to false
HFP=false

// uncomment the following three lines near the bottom of audio.conf
[A2DP]
SBCSources=1
MPEG12Sources=0
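
After editing audio.conf, the daemons have to pick up the new settings. A minimal sketch of the restart, assuming the stock Ubuntu 13.10 service names:

// restart bluetoothd so it re-reads /etc/bluetooth/audio.conf
sudo service bluetooth restart

// restart the per-user PulseAudio daemon
pulseaudio -k
pulseaudio --start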

If it still doesn't work, try the following...
// Check whether the Bluetooth discovery module is loaded:
pactl list | grep -i module-bluetooth-discover

// If the output is empty, load it with:
pactl load-module module-bluetooth-discover

PulseAudio should then (hopefully) recognize the device.
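
To confirm that the connection actually produced an A2DP sink, you can list the PulseAudio sinks after pairing (the sink name is whatever BlueZ assigns, typically starting with bluez_sink):

// an A2DP-connected headset should show up as a bluez sink
pactl list short sinks | grep -i bluez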

Reference: https://bugs.launchpad.net/ubuntu/+source/bluez/+bug/1199059


sudo apt-get install scim

sudo apt-get install scim-tables-zh

Download Liu.bin from here:
http://km2.iiietc.ncu.edu.tw/xms/read_attach.php?id=1347

sudo cp ~/Downloads/Liu.bin /usr/share/scim/tables/Liu.bin

Re-login or restart the computer.

Press Ctrl+Space to invoke the SCIM input method.



Python Socket Programming (Python3)

# server.py

from socket import *

# '' means all available interfaces on this host
myHost = ''
# port number for this service
myPort = 50007

# create a TCP/IP socket object
sockobj = socket(AF_INET, SOCK_STREAM)
# bind the socket object to the host/port
sockobj.bind((myHost, myPort))
# allow up to 5 pending connections
sockobj.listen(5)

 

# listen until the process is killed
while True:
    # wait for the next client to connect
    connection, address = sockobj.accept()
    # print the client's address
    print('Server connected by', address)
    while True:
        # read data from the client socket
        data = connection.recv(1024)
        if not data: break
        # print the data received from the client
        print('Tracy:', data)
        # send a reply line back to the client
        connection.send(b'Echo=>' + data)
    connection.close()


 

# Client.py

 

import sys
from socket import *

serverHost = 'localhost'
serverPort = 50007
# default message to send to the server
message = [b'Hello network world']

if len(sys.argv) > 1:
    # the first command-line argument overrides the server host address
    serverHost = sys.argv[1]
    # any remaining arguments replace the message sent to the server
    if len(sys.argv) > 2:
        message = (x.encode() for x in sys.argv[2:])

# create a TCP/IP socket object
sockobj = socket(AF_INET, SOCK_STREAM)
# connect to the server host/port
sockobj.connect((serverHost, serverPort))
for line in message:
    # send one message line to the server
    sockobj.send(line)
    # receive the echoed response from the server
    data = sockobj.recv(1024)
    print('Client received:', data)

sockobj.close()


Start the server:
tracy@tracyDesktop:~/Documents/python$ python3 testServer.py

Start the client:
tracy@tracyDesktop:~/Documents/python$ python3 testClient.py

Response of the server:

tracy@tracyDesktop:~/Documents/python$ python3 testServer.py
Server connected by ('127.0.0.1', 46030)
Tracy: b'Hello network world'
// the server keeps waiting for the next client to connect

Response of the client:

tracy@tracyDesktop:~/Documents/python$ python3 testClient.py
Client received: b'Echo=>Hello network world'



Start the client again, this time with arguments:
tracy@tracyDesktop:~/Documents/python$ python3 testClient.py "localhost" "I'M Tracy"


Response of the server:

tracy@tracyDesktop:~/Documents/python$ python3 testServer.py
Server connected by ('127.0.0.1', 46030)
Tracy: b'Hello network world'
Server connected by ('127.0.0.1', 46044)
Tracy: b"I'M Tracy"

// the server keeps waiting for the next client to connect

Response of the client:

tracy@tracyDesktop:~/Documents/python$ python3 testClient.py "localhost" "I'M Tracy"
Client received: b"Echo=>I'M Tracy"




The two kinds of coprocessors
1. observers: allow the cluster to behave differently during normal client operations.

2. endpoints: allow you to extend the cluster's capabilities, exposing new operations to client applications.

Observers
How does it work? Let's compare the normal lifecycle of a request with what happens when a RegionObserver intercepts it.

Lifecycle of a request:

1. The client sends a put request.

2. The request is dispatched to the appropriate RegionServer and region.

3. The region receives the put(), processes it, and constructs a response.

4. The final result is returned to the client.

 

What the RegionServer does when a RegionObserver is registered:

1. The client sends a put request.

2. The request is dispatched to the appropriate RegionServer and region.

3. The CoprocessorHost intercepts the request and invokes prePut() on each RegionObserver registered on the table.

4. Unless interrupted by a **prePut()**, the request continues to the region and is processed normally.

5. The result produced by the region is once again intercepted by the CoprocessorHost. This time **postPut()** is called on each registered RegionObserver.

6. Assuming no postPut() interrupts the response, the final result is returned to the client.

There are three types of observers:
- RegionObserver: hooks into the stages of data access and manipulation, for example Get, Put, Delete, Scan, and so on.
- WALObserver: the write-ahead log (WAL) also supports an observer coprocessor. The only available hooks are pre- and post-WAL write events.
- MasterObserver: hooks into DDL events, such as table creation or schema modifications. For example: postDeleteTable().

Example of RegionObserver




package org.apache.hadoop.hbase.coprocessor.example;


import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

// To use a RegionObserver, we extend BaseRegionObserver
public class HelloWorldCoprocessor extends BaseRegionObserver {

    // Override prePut():
    // when a put request arrives, prePut() runs before the put is executed
    @Override
    public void prePut(
        ObserverContext<RegionCoprocessorEnvironment> e,
        Put put,
        WALEdit edit,
        Durability durability)
        throws IOException {
            // Create a folder named 'coprocessor' on the Hadoop file system before the put() is executed
            FileSystem fs = e.getEnvironment().getRegion().getFilesystem();
            fs.mkdirs(new Path("hdfs:///hbase/coprocessor"));
    }

    // Also override preGet():
    // when a get request arrives, preGet() runs before the get is executed
    @Override
    public void preGet(
        ObserverContext<RegionCoprocessorEnvironment> e,
        Get get,
        List<KeyValue> result)
        throws IOException {
            byte[] testme = Bytes.toBytes("r1");
            // If the requested row key equals 'r1', add an extra KeyValue to the result
            if (Bytes.equals(get.getRow(), testme)) {
                KeyValue kv = new KeyValue(get.getRow(), testme, testme, Bytes.toBytes(System.currentTimeMillis()));
                result.add(kv);
            }
    }

}


 

Then we compile it into a .jar file.
You can download the HBase source and put your Java file under the following path:
hbase-0.96.0/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor

Then execute **mvn package** under hbase-0.96.0/hbase-examples.

The .jar file will be saved to hbase-0.96.0/hbase-examples/target.
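
Before deploying it, the jar has to be somewhere HBase can reach. A minimal sketch, assuming the built jar has been renamed to hbase-examples-v2.jar (the name referenced later in this post) and that the commands are run from the hbase-0.96.0 directory:

# for table-attribute loading: put the jar on HDFS so the 'alter' command below can reference it
hadoop fs -put hbase-examples/target/hbase-examples-v2.jar hdfs:///hbase/

# for configuration loading: copy the jar into HBase's lib directory on every server
cp hbase-examples/target/hbase-examples-v2.jar $HBASE_HOME/lib/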


 

Coprocessor Deployment
HBase provides two options for deploying coprocessor extensions:

1. Load from configuration: happens when the master or region servers start up.

    If loaded in this manner, the coprocessor will be active on all regions of all tables.

2. Load from table attribute: dynamic loading when the table is (re)opened.

    Configured on a per-table basis, via the shell command 'alter' with 'table_att'.



# Load from configuration: add the property below to hbase-site.xml.

# The property name is chosen by the following rules:
# hbase.coprocessor.region.classes: for RegionObservers and Endpoints
# hbase.coprocessor.master.classes: for MasterObservers
# hbase.coprocessor.wal.classes: for WALObservers

# The value is the class name; if multiple classes are specified for loading, the class names must be comma-separated.

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.example.HelloWorldCoprocessor</value>
</property>




***** ATTENTION *****
The jar file should be copied to each server in your cluster and placed somewhere HBase can see it,
for example under $HBASE_HOME/lib/.

After the configuration is set, you need to stop HBase and start it again with the commands below:
stop-hbase.sh //stop HBase
start-hbase.sh //re-start HBase

***** ATTENTION *****



# Load from the shell
# This is on a per-table basis

// Create a table named 'table1' with column family 'f1'
hbase(main):014:0> create 'table1','f1'
0 row(s) in 0.4360 seconds
=> Hbase::Table - table1

// To set up the coprocessor with the alter shell command, you need to disable the table first
hbase(main):015:0> disable 'table1'
0 row(s) in 1.3170 seconds

// 'coprocessor' => 'location of file | class name | priority | attrs'
hbase(main):017:0> alter 'table1', METHOD => 'table_att', 'coprocessor' => 'hdfs:///hbase/hbase-examples-v2.jar|org.apache.hadoop.hbase.coprocessor.example.HelloWorldCoprocessor|1001|'
Updating all regions with the new schema...
1/1 regions updated.
Done.
0 row(s) in 1.2470 seconds

// You can use the describe shell command to check the settings
hbase(main):018:0> describe 'table1'
DESCRIPTION ENABLED
'table1', {TABLE_ATTRIBUTES => {coprocessor$1 => 'hdfs:///hbase/hbase-examples-v2.jar|org.apache false
.hadoop.hbase.coprocessor.example.HelloWorldCoprocessor|1001|'}, {NAME => 'f1', DATA_BLOCK_ENCOD
ING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => '
NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65
536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}
1 row(s) in 0.0420 seconds

# Remember to enable 'table1' when done
hbase(main):022:0> enable 'table1'
0 row(s) in 1.3650 seconds


 

***** PS. We haven't succeeded yet in setting up the coprocessor by loading it from the shell... still trying :P

Test your coprocessor
- The coprocessor should create a new folder named 'coprocessor' under hdfs:///hbase/ before a put shell command is executed.
- The coprocessor should add an extra KeyValue to the result when the row requested by a get command is 'r1'.

Here we try the prePut() function first.



# Make sure there is no folder named 'coprocessor' under hdfs:///hbase/
ubuntu@ip-10-232-158-223:~$ hadoop fs -ls hdfs:///hbase
Found 9 items
drwxr-xr-x - ubuntu supergroup 0 2013-12-26 06:34 hdfs:///hbase/.tmp
drwxr-xr-x - ubuntu supergroup 0 2013-12-26 06:32 hdfs:///hbase/WALs
drwxr-xr-x - ubuntu supergroup 0 2013-12-26 08:26 hdfs:///hbase/archive
drwxr-xr-x - ubuntu supergroup 0 2013-12-26 06:32 hdfs:///hbase/corrupt
drwxr-xr-x - ubuntu supergroup 0 2013-12-19 10:05 hdfs:///hbase/data
-rw-r--r-- 3 ubuntu supergroup 99428 2013-12-26 05:00 hdfs:///hbase/hbase-examples-v2.jar
-rw-r--r-- 3 ubuntu supergroup 42 2013-12-19 10:05 hdfs:///hbase/hbase.id
-rw-r--r-- 3 ubuntu supergroup 7 2013-12-19 10:05 hdfs:///hbase/hbase.version
drwxr-xr-x - ubuntu supergroup 0 2013-12-26 06:43 hdfs:///hbase/oldWALs

# Then we try to put data into table1
hbase(main):023:0> put 'table1','r1','f1:1','value1'
0 row(s) in 0.0880 seconds

# Check the Hadoop file system again
# Yes!! A folder named 'coprocessor' has been created.
ubuntu@ip-10-232-158-223:~$ hadoop fs -ls hdfs:///hbase
drwxr-xr-x - ubuntu supergroup 0 2013-12-26 06:34 hdfs:///hbase/.tmp
drwxr-xr-x - ubuntu supergroup 0 2013-12-26 06:32 hdfs:///hbase/WALs
drwxr-xr-x - ubuntu supergroup 0 2013-12-26 08:26 hdfs:///hbase/archive
drwxr-xr-x - ubuntu supergroup 0 2013-12-26 08:35 hdfs:///hbase/coprocessor
drwxr-xr-x - ubuntu supergroup 0 2013-12-26 06:32 hdfs:///hbase/corrupt
drwxr-xr-x - ubuntu supergroup 0 2013-12-19 10:05 hdfs:///hbase/data
-rw-r--r-- 3 ubuntu supergroup 99428 2013-12-26 05:00 hdfs:///hbase/hbase-examples-v2.jar
-rw-r--r-- 3 ubuntu supergroup 42 2013-12-19 10:05 hdfs:///hbase/hbase.id
-rw-r--r-- 3 ubuntu supergroup 7 2013-12-19 10:05 hdfs:///hbase/hbase.version
drwxr-xr-x - ubuntu supergroup 0 2013-12-26 08:32 hdfs:///hbase/oldWALs

# Now we test our preGet() function.
# Check the existing data with the scan shell command
hbase(main):025:0> scan 'table1'
ROW COLUMN+CELL
r1 column=f1:1, timestamp=1388046967610, value=value1
r2 column=f1:2, timestamp=1388047108045, value=value2
2 row(s) in 0.1070 seconds

# Now we get the row named 'r1'; an extra KeyValue should be added to the result
hbase(main):027:0> get 'table1','r1'
COLUMN CELL
r1:r1 timestamp=9223372036854775807, value=\x00\x00\x01C.\x0E\xF4\x01
f1:1 timestamp=1388046967610, value=value1
3 row(s) in 0.0150 seconds

# Try to get the row named 'r2'; only the data of row 'r2' should be returned
hbase(main):028:0> get 'table1','r2'
COLUMN CELL
f1:2 timestamp=1388047108045, value=value2
1 row(s) in 0.0090 seconds


 


Endpoints
To be continued.......

