tongsampah

Hack by the Beach in Jakarta

Posted on

This past weekend, we participated in the BCA Finhacks 2016 hackathon, held at Segarra on Ancol beach in Jakarta.
The atmosphere was very hot and humid during the day, but the organizers provided free-flow food and drinks for every participant.

Anyway, we built an app called AngpaO. It is basically similar to Eventbrite, but with two extra main features: “live streaming” of an event and a “donation” feature using BCA E-Wallet integration. If you would like to know more about the app, our presentation slide deck is uploaded here.

Our team was called “Opcode” and consisted of 3 people:
Ihsan Fauzi Rahman (Cermati.com)
Firman Gautama (ADSKOM)
Supardi (Lippo X)

What our team accomplished in the 24-hour time frame:

1. Reverse engineered the PHP BCA Finhacks SDK.
– Extracted the bca (composer phar) file into raw PHP.
– Implemented a code hook to cache the ‘access_token’ from the OAuth2 class in the BCA SDK.
— Why? Because at the briefing on the morning of the first day, we learned we were limited to 5 access_token requests/min, and the BCA SDK didn’t cache the token. We were worried that we could be throttled during development.
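The token-caching hook can be sketched like this (a minimal Python sketch of the idea; the real hook was patched into the PHP SDK, and `fetch_token` here is a hypothetical stand-in for the SDK’s OAuth2 call):

```python
import time

class TokenCache:
    """Cache an OAuth2 access token so we stay under the 5 requests/min limit."""

    def __init__(self, fetch_token, margin=60):
        self._fetch_token = fetch_token  # callable returning (token, expires_in_seconds)
        self._margin = margin            # refresh this many seconds before expiry
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Only hit the OAuth2 endpoint when the cached token is missing or stale.
        if self._token is None or time.time() >= self._expires_at - self._margin:
            token, expires_in = self._fetch_token()
            self._token = token
            self._expires_at = time.time() + expires_in
        return self._token
```

Every part of the app asks the cache for a token instead of calling the OAuth2 endpoint directly, so repeated requests within the token’s lifetime never count against the rate limit.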

2. Created a PHP HTTP wrapper for the BCA API.
– Our main app is written in NodeJS, but at the time BCA only provided SDKs for PHP and Java, which is why we created an internal HTTP wrapper for our NodeJS app.
– We implemented the following BCA APIs in our PHP HTTP wrapper:
— User Registration
— User Update
— Topup
— Payment
— Transaction History
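From the NodeJS side, the wrapper is just a small internal route table mapping each action to an HTTP call that the PHP side forwards to the BCA SDK. A sketch of the idea (the paths and action names here are hypothetical, not the actual routes we used):

```python
# Hypothetical route table for the internal HTTP wrapper: the NodeJS app calls
# these paths, and the PHP side forwards them to the matching BCA SDK call.
ROUTES = {
    "user_register":       ("POST", "/internal/user/register"),
    "user_update":         ("POST", "/internal/user/update"),
    "topup":               ("POST", "/internal/wallet/topup"),
    "payment":             ("POST", "/internal/wallet/payment"),
    "transaction_history": ("GET",  "/internal/wallet/history"),
}

def build_request(action, payload=None):
    """Return the (method, path, body) triple the NodeJS client would send."""
    if action not in ROUTES:
        raise ValueError("unknown action: %s" % action)
    method, path = ROUTES[action]
    return method, path, (payload or {})
```

Keeping the mapping in one table means adding a new BCA API later is a one-line change on each side.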

3. Created the NodeJS frontend + API for our web and Android apps.

– We implemented the following features:
— login & register for the web (integrated with Facebook accounts)
— user profile update
— creating a new AngpaO event
— listing events for other users/guests
— “donation” history for event owners
– For the donation history, we couldn’t just use the BCA API, because it only returns the last 10 transactions. So we re-implemented our own transaction history.
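The idea behind our own transaction history is simple: record every successful BCA transaction as it happens, keyed by event, so the full list survives past the API’s 10-transaction window. A minimal sketch (an in-memory stand-in; the real version would persist to a database):

```python
from collections import defaultdict

class TransactionHistory:
    """Keep our own full donation history per event, since the BCA API
    only returns the last 10 transactions."""

    def __init__(self):
        self._by_event = defaultdict(list)

    def record(self, event_id, tx):
        # Append every successful BCA transaction as it happens.
        self._by_event[event_id].append(tx)

    def for_event(self, event_id, limit=None):
        """Return all transactions for an event, or just the most recent `limit`."""
        txs = self._by_event[event_id]
        return list(txs) if limit is None else txs[-limit:]
```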

4. Created a native Android app

– We designed it with the smartphone as the AngpaO wallet in mind (so every transaction happens only via a smartphone).
– We implemented the following features:
— login & register from Android, integrated with Facebook accounts
— the AngpaO event feed list
— the AngpaO event view with live video stream
— Top-up
— Donate
— QR code reading for AngpaO events
— importing the user profile from Facebook

5. Created live video+audio streaming for AngpaO events.
– backend: Wowza with the RTMP & HLS protocols
– frontend: jwplayer playing in the browser (we also embedded this in our Android app)
– broadcaster: Open Broadcaster Studio, with a webcam as the video and audio source (for the demo)


======
BONUS Thoughts 😀
======

1. The first version of the BCA PHP SDK was broken. (On 18 April 2016, they gave everyone the SDK to download and test.)
– How was it broken? After we “reverse engineered the SDK,” we found that there were no getters for OAuth2Client and JsonParser in the SDK’s main class loader.
(This was fixed by BCA on 23 April 2016. To be prepared, we had also fixed the SDK ourselves, but on the first day of the hackathon they told us there was a new version of the SDK.)
2. The BCA payment API briefly returned errors.
– It complained that the date format was wrong, even though we used exactly the format from the example: https://finhacks.id/api/?php#payment. (Fixed?)
–> It seems (some of) the BCA API servers/endpoints were on a different timezone (GMT+5 instead of GMT+7). (How do we know? We looked at the transaction date in the BCA server’s responses.)
–> We used a workaround:
– For payments, make sure the ‘request date’ sent to the server is earlier than the current datetime on the BCA API server, or the request will be considered invalid.
– So we deliberately sent a ‘request date’ shifted a bit into the past to avoid the race condition that caused ‘payment’ requests to fail.
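The workaround boils down to this: instead of sending the local current time as the request date, send a timestamp shifted back by a safety margin so it is never ahead of the API server’s clock. A sketch (the two-hour GMT+7 vs GMT+5 gap plus a small buffer; the date format here is illustrative, not necessarily the exact one the BCA API expects):

```python
from datetime import datetime, timedelta

# GMT+7 (our local time) minus GMT+5 (the apparent server time) = 2 hours,
# plus a small extra buffer against clock skew.
SAFETY_MARGIN = timedelta(hours=2, minutes=5)

def request_date(now=None):
    """Return a request date shifted into the past, so the BCA server
    never sees a timestamp from its 'future'."""
    now = now or datetime.now()
    return (now - SAFETY_MARGIN).strftime("%Y-%m-%dT%H:%M:%S")
```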
3. The WiFi access points were overloaded, probably because of the number of participant devices connecting to them, so the internet connection was rather unstable.
4. The BCA API sandbox suddenly disappeared about 12 hours before the event started 😦
5. Not every team was treated equally.
I don’t want to point out who is who, but you can ask many of the other people/teams that participated in the event to confirm 🙂

Why Is Uber Being Obstructed From Operating in Indonesia, but Not GoJek?

Posted on Updated on

There are currently two transportation startups growing rapidly in Indonesia: Uber and GoJek.

The facts:

  1. Both are transportation startups operating in Jakarta.
  2. Both ONLY facilitate motor-vehicle owners (car/motorcycle) who want to earn extra money by taking passengers from point A to point B.
  3. The difference: Uber facilitates car drivers, while GoJek facilitates motorcycle drivers.

So why is Uber being obstructed in Indonesia while GoJek is not?

The police have gone as far as entrapping and arresting drivers registered with Uber, on the grounds that they have no permit to operate as a “taxi.”

Now let me ask the police:

WHO IS OPERATING AS A TAXI? UBER != TAXI, officers!

Then there is GoJek, whose business is exactly the same as Uber’s, except it uses motorcycles. If the police take issue with Uber using black (private) license plates, well, GoJek and every other ojek driver also use BLACK plates. WHY IS THAT NOT A PROBLEM? WHY AREN’T THEY ENTRAPPED AND ARRESTED TOO?

If the police only make things difficult for Uber, that is discrimination and it is unfair! Is it because Uber uses cars and GoJek uses motorcycles?

I am not defending Uber, but I feel the police are being unfair in how they act.

(Hadoop) Make Sure Your Datanode File System Has the Correct Permissions!

Posted on Updated on

If you have ever hit an error like the one below while trying to run a MapReduce task, the problem is most likely a directory-permission issue on one of the datanodes where the MapReduce job runs (via YARN).

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hive-common-0.13.1-cdh5.3.0.jar!/hive-log4j.properties
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1420709500935_0492, Tracking URL = http://**********:8088/proxy/application_1420709500935_0492/
Kill Command = /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/bin/hadoop job  -kill job_1420709500935_0492
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2015-02-03 01:05:26,016 Stage-1 map = 0%,  reduce = 0%
2015-02-03 01:05:36,674 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 6.15 sec
2015-02-03 01:05:55,424 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 11.77 sec
2015-02-03 01:06:13,084 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 15.43 sec
MapReduce Total cumulative CPU time: 15 seconds 430 msec
Ended Job = job_1420709500935_0492
Launching Job 2 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1420709500935_0493, Tracking URL = http://**********:8088/proxy/application_1420709500935_0493/
Kill Command = /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/bin/hadoop job  -kill job_1420709500935_0493
Hadoop job information for Stage-2: number of mappers: 0; number of reducers: 0
2015-02-03 01:06:29,383 Stage-2 map = 0%,  reduce = 0%
Ended Job = job_1420709500935_0493 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 15.43 sec   HDFS Read: 75107359 HDFS Write: 48814 SUCCESS
Stage-Stage-2:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 15 seconds 430 msec

When we dug further into the error message at:

 http://**********:8088/proxy/application_1420709500935_0493/
Application application_1420709500935_0493 failed 2 times due to AM Container for appattempt_1420709500935_0493_000002 exited with exitCode: -1000 due to: Not able to initialize distributed-cache directories in any of the configured local directories for user USERNAME
.Failing this attempt.. Failing the application.

In my case, the datanode root directories on the filesystem are:

/disk1/, /disk2/, /disk3/

So, to get rid of the error message above, I had to make sure the following directories have the right ownership (yarn:yarn), since my MapReduce jobs are managed by YARN and its default user needs to be able to create its cache files there:

/disk1/yarn/nm/usercache
/disk2/yarn/nm/usercache
/disk3/yarn/nm/usercache

How? Like this:

chown -R yarn:yarn /disk1/yarn/nm/usercache
chown -R yarn:yarn /disk2/yarn/nm/usercache
chown -R yarn:yarn /disk3/yarn/nm/usercache
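To confirm the fix took effect on every disk, a quick ownership check helps (a small Python sketch; the usercache paths are the ones from my cluster and will differ on yours):

```python
import grp
import os
import pwd

def dir_owner(path):
    """Return the (user, group) pair that owns a directory."""
    st = os.stat(path)
    return pwd.getpwuid(st.st_uid).pw_name, grp.getgrgid(st.st_gid).gr_name

def check_usercache(paths, expected=("yarn", "yarn")):
    """Return the usercache directories NOT owned by the expected user:group."""
    return [p for p in paths if dir_owner(p) != expected]

# Example:
# bad = check_usercache(["/disk1/yarn/nm/usercache",
#                        "/disk2/yarn/nm/usercache",
#                        "/disk3/yarn/nm/usercache"])
# if bad:
#     print("fix ownership on:", bad)
```

Running this across all datanodes (e.g. via ssh) catches the one node with a stray root-owned usercache directory before the next job fails.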