When building SPAs with React and React Router, we keep running into the same problem: different route paths — say a list page and a detail page — need to share some data.
We can solve it with URL params carried in the path, query, or search, like below:
```jsx
// path params
<Route path="querylist/:taskId" component={QueryList} />
// and we can read the params like this
const { taskId } = this.props.params

// query & search params
<Link target="_blank" to={{ pathname: '/crawler/group/duplicate/' + row.id, query: { taskId: 12345 } }}>duplicate list</Link>
<Link target="_blank" to={{ pathname: '/crawler/group/duplicate/' + row.id, search: querystring.stringify({ taskId: 12345 }) }}>duplicate list</Link>
const { query, search } = this.props.location
```
But the disadvantages are obvious:
1. the URL gets ugly
2. we have to worry about the character set of the params: some characters may not survive in a URL, or may not be handled consistently by every browser
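The character-set concern can be seen with the standard `encodeURIComponent`/`decodeURIComponent` APIs; reserved or non-ASCII characters have to be percent-encoded before they can travel safely in a URL (the task name below is just an illustrative value):

```javascript
// Reserved characters (&) and non-ASCII characters must be percent-encoded
// before going into a query string or search param.
var taskName = '任务 A&B'
var encoded = encodeURIComponent(taskName)
// encoded === '%E4%BB%BB%E5%8A%A1%20A%26B'
var decoded = decodeURIComponent(encoded)
// decoded === '任务 A&B'
```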
solutions
redux solution
What we really want is a centralized state management solution, and Redux provides exactly that.
But I don't actually like this approach — if I did, there would be no need for this post. Two reasons:
1. Redux can be cumbersome for app state like fetching or submitting data
2. Redux requires a lot of boilerplate (repeated) code
So I gave up on Redux.
location state
Actually, a React Router location supports passing custom state along with the path, so we can handle it like this:
```jsx
this.props.router.push({
  pathname: '/crawler/normalize/' + taskId,
  state: Object.assign({ taskName: '123' }, this.props.location.state)
})
// and the state can be accessed like this
const state = this.props.location.state
```
It's been a long time since my last article.
For the past year I've been working on HTML5 and hybrid apps using Backbone. The problems I ran into with Backbone made me think more about frontend architecture.
Several problems with Backbone:
1. The Backbone model is weak and error-prone in complex projects
When a model is listened to from many views — like an accountModel — you can't predict what will happen. It's terrible.
2. Missing data binding
Every model change has to update the view manually, which is painful.
3. Hard to test
Backbone's event-driven style leaves developers manipulating the DOM themselves, and that is hard to test.
4. Poor code reuse
Code can't be reused effectively because it manipulates the DOM directly.
Compared with that, React and the Flux idea bring a new frontend technology stack. React is an excellent view layer (the V in MVC), and the Flux programming model greatly decouples business logic, which solves the first problem above.
React Router works on top of history. The entry point just binds the Redux store and React Router together: history enables routing, and Provider binds the store to React:
```jsx
import React from 'react'
import ReactDOM from 'react-dom'
import { Provider } from 'react-redux'
import { Router, Route } from 'react-router'
import { createHistory } from 'history'
// syncReduxAndRouter comes from redux-simple-router (the original omitted the import)
import { syncReduxAndRouter } from 'redux-simple-router'
import App from './containers/App'
import Buy from './containers/Buy'
import configure from './store'

const store = configure()
const history = createHistory()
syncReduxAndRouter(history, store)

// if you just want hash-based routing, use
// <Router location="hash"></Router>
ReactDOM.render(
  <Provider store={store}>
    <Router history={history}>
      <Route path="/" component={App}></Route>
      <Route path="/buy" component={Buy}></Route>
    </Router>
  </Provider>,
  document.getElementById('root')
)
```
Below we can see how to generate the Redux store and bind it to React.
2. apply the HTTP request middleware and create the Redux store
Redux's createStore can only handle synchronous actions (like in the todo app), so we use a Redux middleware to handle async actions such as AJAX requests.
```js
import { createStore, compose, applyMiddleware, combineReducers } from 'redux'
import rootReducer from '../reducers'
import apiMiddleware from '../middlewares/apiMiddleware'

const create = window.devToolsExtension
  ? window.devToolsExtension()(createStore)
  : createStore
const finalCreateStore = compose(applyMiddleware(apiMiddleware))(create)
// const store = create(rootReducer, initialState)
const store = finalCreateStore(rootReducer)
```
Here we created the Redux store and bound it to React. Then how do we bind the Redux state and actions to React? Look below — we use Redux's connect and bindActionCreators.
3. bind Redux state and actions to React via this.props
With connect and mapStateToProps we can bind Redux state to React components (or plain HTML5 pages). Developers can then dispatch actions or read data from this.props, and a parent can pass data down to its children through props.
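To show what bindActionCreators actually puts on this.props, here is a simplified re-implementation in plain JS — not Redux's real source, just a sketch of the idea; the `buy` action creator and the recording dispatch are made-up examples:

```javascript
// Simplified sketch of bindActionCreators: wrap every action creator so that
// calling it dispatches the created action (illustrative, not Redux's source).
function bindActionCreators(creators, dispatch) {
  var bound = {}
  Object.keys(creators).forEach(function (key) {
    bound[key] = function () {
      return dispatch(creators[key].apply(null, arguments))
    }
  })
  return bound
}

// usage with a fake dispatch that just records actions
var dispatched = []
var actions = bindActionCreators(
  { buy: function (id) { return { type: 'BUY', id: id } } },
  function (action) { dispatched.push(action); return action }
)
actions.buy(7)
// dispatched now contains [{ type: 'BUY', id: 7 }]
```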
Reducers only describe the data transformation — how state A becomes state B for a given action — and then the React components re-render the view.
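A reducer in that sense is just a pure function from (state, action) to a new state. Here is a minimal illustrative one — the counter shape is an assumption, not a reducer from this app:

```javascript
// A reducer only describes how state A turns into state B for an action;
// it never mutates the old state object.
function counter(state, action) {
  if (state === undefined) state = { count: 0 }   // initial state
  switch (action.type) {
    case 'INCREMENT':
      return Object.assign({}, state, { count: state.count + 1 })
    default:
      return state
  }
}

var s0 = counter(undefined, { type: '@@INIT' })   // { count: 0 }
var s1 = counter(s0, { type: 'INCREMENT' })       // { count: 1 }, s0 untouched
```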
Sync actions are dispatched directly, so reducers can change the state immediately.
Async actions generally carry three types — [load, success, error] — corresponding to const [PENDING, FULFILLED, REJECTED] = action.types in middlewares/apiMiddleware.js.
The middleware performs the request with superagent (or any other HTTP request library you like) and dispatches follow-up actions from the callback; next is just Redux's dispatch. In containers/Buy/index.js above we bind dispatch to the async action creators with bindActionCreators(Coins, dispatch).
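The real middlewares/apiMiddleware.js isn't shown in this post, but based on the description it might look roughly like this sketch; the `callApi` field and the fake promise-based request are assumptions standing in for a superagent call:

```javascript
// Sketch of an api middleware (assumed shape): a sync action (no `types`)
// passes straight through; an async action dispatches PENDING first, then
// FULFILLED or REJECTED once the request settles.
function apiMiddleware(store) {
  return function (next) {
    return function (action) {
      if (!action.types) return next(action)
      var PENDING = action.types[0]
      var FULFILLED = action.types[1]
      var REJECTED = action.types[2]
      next({ type: PENDING })
      return action.callApi().then(
        function (res) { return next({ type: FULFILLED, payload: res }) },
        function (err) { return next({ type: REJECTED, error: err }) }
      )
    }
  }
}

// usage with a fake `next` and a fake request instead of superagent
var seen = []
var run = apiMiddleware({})(function (action) { seen.push(action); return action })
var done = run({
  types: ['LOAD', 'SUCCESS', 'ERROR'],
  callApi: function () { return Promise.resolve({ coins: 3 }) }
})
```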
With "remember password" enabled, the password must be encrypted in some way and stored in a cookie. There are actually three cookies in the request header when the browser asks for authentication:
$.cookie('cn') - username
$.cookie('ct') - timestamp of the last login
$.cookie('ctoken') - encrypted password or other info required for server-side authentication
So ctoken is the important one, and the problem becomes: how do we make sure the password can't be cracked? Below is one way, based on XOR-ing two strings together.
Think about it: if we simply hash the password with MD5, SHA-1, etc. and store it in the browser, is that safe? Maybe — but many common passwords' MD5 or SHA-1 hashes have already been cracked via lookup tables, so we should encrypt it in a more complicated way:
```js
var user = $.trim($("#username").val());
var pass = $.trim($("#password").val());
var sk = new Date().getTime().toString();
$.cookie('cn', user, { expires: 7, path: '/' });
$.cookie('ck', sk, { expires: 7, path: '/' });
var phash = CryptoJS.MD5(pass).toString();
var cthash = CryptoJS.MD5(user + sk + phash).toString();
var token = xorString(phash, cthash);
$.cookie('token', token, { expires: 7, path: '/' });
// then request the server for validation
```
So what is xorString? It creates the token used for server-side validation. It takes two params: phash, the MD5 of the password, and cthash, the MD5 of username + timestamp + MD5(password). xorString combines the two strings in a special way. What does it do?
It's easy: it just XORs the two MD5 digests byte by byte. The result looks like an MD5 digest again, but it is no longer a hash of the password itself. Even if someone knows the token is an XOR result, it's nearly impossible for them to crack it.
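The browser-side xorString itself isn't shown in the post, so here is a plausible sketch reconstructed from the description and the C# counterpart below — XOR two equal-length hex strings byte by byte (the function body is an assumption, not the original source):

```javascript
// XOR two hex strings of equal length, byte by byte, returning hex.
// Reconstructed for illustration; not the post's original source.
function xorString(a, b) {
  var out = ''
  for (var i = 0; i < a.length; i += 2) {
    var x = parseInt(a.substr(i, 2), 16) ^ parseInt(b.substr(i, 2), 16)
    out += (x < 16 ? '0' : '') + x.toString(16)
  }
  return out
}

// XOR is its own inverse, which is what lets the server verify the token:
// xorString(token, cthash) === phash
var phash = '5f4dcc3b5aa765d61d8327deb882cf99'  // MD5("password")
var cthash = '0cc175b9c0f1b6a831c399e269772661' // MD5("a"), just an example digest
var token = xorString(phash, cthash)
```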
Of course, the server can easily calculate the right token on its side. Here's a C# version:
```csharp
private static string StringXor(string str, byte[] key)
{
    for (var i = 0; i < key.Length; i++)
    {
        key[i] = (byte)(Convert.ToByte(str.Substring(i * 2, 2), 16) ^ key[i]);
    }
    return BitConverter.ToString(key).Replace("-", "").ToLower();
}

private bool auth()
{
    string cn = "";
    string ct = "";
    string token = "";
    string password = ""; // select from the db
    byte[] cctoken;
    using (var md5 = MD5.Create())
    {
        cctoken = md5.ComputeHash(Encoding.ASCII.GetBytes(cn + ct + password));
    }
    return token.Equals(StringXor(password, cctoken));
}
```
We can store the ‘password’ safely in the browser.
It's been a long time since my last blog. This time I'm starting an open source project called node-crawler. Straight to the point — the project has three parts:
1.1 Admin Dashboard
It's a distributed crawling service. Through the admin dashboard, an admin can check crawler status, manage the crawled data, control the crawler clients, manage client configs, and deploy clients to different servers, plus some functions like data analysis and statistics. (This part is mainly developed with AngularJS.)
1.2 Backend Service
A Node.js backend service responsible for collecting the uploaded data. It mainly filters the data, analyzes the crawling results, and stores them in the database.
1.3 Crawler Client
The client is responsible for crawling pages, executing scripts, and controlling the number of spiders. The backend service sends some configs; the client reads them and does the crawling job the server assigned.
The client should be smart enough to avoid being blocked by the target server, so it needs strategies like rotating IPs, changing the user agent, limiting the spider speed, and so on.
The server sends a message to check and start the client, with a config like this:
```js
var clientConfig = {
  speed: 100,   // 100 requests/s against the target website
  workers: 10,  // send 10 workers crawling the website together
  // some other configs
}
```
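How the client honors `speed` is not specified in the post; one naive possibility is a simple interval-based throttle like this sketch (`makeThrottle` is an invented name, not part of the project):

```javascript
// Naive throttle: allow at most `perSecond` requests per second by enforcing
// a minimum interval between consecutive allowed requests (illustrative only).
function makeThrottle(perSecond) {
  var interval = 1000 / perSecond
  var last = -Infinity
  return function canRequest(now) {   // `now` is a timestamp in ms
    if (now - last >= interval) {
      last = now
      return true
    }
    return false
  }
}

var canRequest = makeThrottle(100)   // speed: 100/s -> one request per 10 ms
// canRequest(0)  -> true
// canRequest(5)  -> false (only 5 ms since the last allowed request)
// canRequest(10) -> true
```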
The client then starts the crawling work according to the config:
```js
var crawlerConfig = {
  target: "http://www.taobao.com/p/xxxxx", // target website to crawl
  element: '',  // target element
  attr: '',     // the real data to grab
  // some other configs about the target web page
}
```
This post is just an introduction to the crawling system; later posts will cover how to implement it. Welcome!
Scope — the execution context — is the set of all variables accessible while a statement executes. Because JavaScript allows inner functions, scopes form a chain: at the top sits window (in a browser environment), and every function definition creates a scope object holding an outer reference to the enclosing variable set, building a hierarchical chain. A function at the bottom resolves a variable by searching upward level by level. This is also why you shouldn't write t = 'without var is global var': resolving (and implicitly creating) that global forces a walk up the entire chain, which greatly hurts execution efficiency. See ECMA-262 for the details.
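The level-by-level lookup described above can be demonstrated directly (a minimal sketch; the variable names are arbitrary):

```javascript
// Each function creates a new scope with an outer reference; name lookup
// walks the chain from the innermost scope outward.
var level0 = 'global'               // top of the chain (window in a browser)

function outer() {
  var level1 = 'outer'
  function inner() {
    var level2 = 'inner'
    // resolved here -> level2; one level up -> level1; at the top -> level0
    return [level2, level1, level0].join(',')
  }
  return inner()
}

// outer() === 'inner,outer,global'
```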
It's been a long time since my last WordPress blog went down. Thanks to the terrible US VPS I had been using, it died before I could transfer my important backups.
Putting that aside, I hadn't wanted to write blogs for a long time after the accident. Setting up a blog with GitHub Pages & Jekyll is popular, thanks to GitHub's free static page hosting and powerful