[Still from Wolf Warrior 2 (Internet)] Wu Jing's Wolf Warrior 2 has taken in enormous box-office receipts and looks set to pass 5 billion yuan; Chinese media analysts estimate Wu Jing personally stands to pocket 600 to 700 million. On screen, Wu Jing is every inch the hero, rescuing hundreds of Chinese citizens from a foreign country and proclaiming to the world, "Whoever offends China will be hunted down, no matter how far away!" He also extols the Chinese passport: "A Chinese passport may not take you everywhere in the world, but it can bring you home from anywhere in the world." A box office approaching 5 billion yuan means nearly one in ten people in China bought a ticket. After the film, those viewers, excitement notwithstanding, did not forget to think, and they discovered that Wu Jing, who praises the Chinese passport so lavishly on screen, actually disdains it; he disdains it himself, and so does his wife. Wu Jing reportedly holds a Hong Kong passport and his wife an American one, and although their child is named Wu Suowei ("couldn't care less"), they cared a great deal about which passport the child would hold, giving birth in Britain so that the child holds a British passport. Netizens, unsurprisingly displeased, accused him of loving his country with his mouth only. To give Wu Jing the chance to love his country through action, netizens called on him to donate 100 million yuan, or his entire earnings from the film, to the Sichuan earthquake zone. In the end, Wu Jing donated 1 million. This reportedly made netizens angrier still: they branded him a hypocrite who preaches patriotism yet cannot part with his money, and who threw away his Chinese passport long ago. What kind of patriotism is that?
Some things simply happen, and there is no turning back. Like Howard Lam: no sooner is one staple hammered flat than another pops up; the Democratic Party: cleaned out in a single bet; young people under political suppression: at their wits' end; Hong Kong: one country, one system, in free fall, unable to get back up, a long and sorry story, barely breathing... Celine: famous overnight; Celine's father, his management company, its publicity chief and the group chairman: a public-relations disaster in which one wrong move forfeited the whole board. The nine-year-old with the mighty lungs flew to America with her celebrity dad, and an airport worker who merely quipped to a colleague over the radio, "Isn't that the one riding his daughter's coattails?", was sacked by the security company. Only afterwards did the complainant decide the matter had been "blown out of proportion". Fired for stating a fact? Because the man in charge felt the worker had "humiliated a father in front of his young child"? Escalate it like that and the damage is hard to undo; hope the security company will reconsider and rehire the man? Who are you, and who would listen? Just like that, a man's rice bowl was smashed. Granted, the saying "ruining a man's livelihood is like killing his parents" may overstate things, and there is always work elsewhere, but making a big thing of a small one, turning a slip into a sin, making mountains out of molehills, had already soured public feeling; now it drives the audience away outright. Word is the little money tree will hold a concert. Even without the airport incident we had no plans to attend, time being precious, and we dreaded the star dad once again riding his daughter's fame onto the stage to perform, mightily pleased with himself. With someone's job lost over all this while they beam and count their takings, who could bear to keep watching?
Hi, this is Ajay and Alex, and we’re the founders of Plasticity (https://www.plasticity.ai/). We're building an API that helps developers create human-like natural language interfaces. Four years ago, before Alexa Skills or SiriKit were released, we hacked third-party commands into Siri without jailbreaking (https://www.wired.com/2014/04/googolplex/). It was the first app store for voice commands. Since then, we’ve worked on NL interfaces at Google and Apple Siri. Now we're tackling the next problem: products using NLP are fairly simplistic in what they can do for users. For example, systems like Siri still struggle to directly answer a basic question like "When is the Y Combinator application due?" because they can't understand and reason about where the answer may lie in a sentence on Y Combinator's website.
We’re approaching the problem differently by understanding the structure of language and relationships within text, instead of relying on more simplistic methods like keyword matching. We build a graph of entities and their relationships within a sentence along with other linguistic information. You can think of it as “Open Information Extraction” with a lot more information (https://www.plasticity.ai/api/demo).
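To make that concrete, here is a hypothetical sketch of the kind of relation graph that could be extracted from one sentence; the field names and the due date are invented for illustration, not Plasticity's real schema or output:

```ts
// Hypothetical shape of an Open-IE-style result; field names and the
// due date are invented for illustration, not Plasticity's real schema.
interface Relation {
  subject: string;   // the entity the statement is about
  predicate: string; // the normalized relation
  object: string;    // the entity or value the relation points to
}

// "The Y Combinator application is due on October 4." (date made up)
const graph: Relation[] = [
  {
    subject: "Y Combinator application",
    predicate: "is due on",
    object: "October 4",
  },
];

// "When is the Y Combinator application due?" can then be answered by
// matching subject and predicate, and returning the object.
const answer = graph.find(
  (r) => r.subject === "Y Combinator application" && r.predicate === "is due on"
)?.object;
console.log(answer); // "October 4"
```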
Currently, we use a TensorFlow model to perform classical tasks like part-of-speech tagging, tokenization, and syntax dependency parsing. We built our own Wikipedia crawler for data to better handle chunking and disambiguation, which helps return more accurate results for multi-word entities in sentences like: "The band played let it be by the beatles." We wrote our open IE algorithms from scratch, focusing on speed. It's written completely in C++, and we are adding more features every day.
Our public APIs are in beta right now, we’re constantly working to improve the accuracy, and we’re looking forward to hearing feedback. We’d love to hear what the HN community is working on with NLP and how we can help!
elil17 4 hrs I'm impressed with Cortex - all the industry leaders (Google, Siri, Alexa) answer "Who killed John Wilkes Booth" with "Abraham Lincoln," but this gives the correct answer. It shows that it has a deeper understanding of its data sources.
visarga 3 hrs I asked "What is taller, a dog or a giraffe?" and it didn't know. Common sense is not yet in the knowledge graph. Maybe it can't perform comparisons
Also: "What is the largest city in Europe?" -> "New York City".
"What is the largest city in the world?" -> "Gotham City"
So it seems to make KB lookup errors and probably can't do logic/set operations.
acsands13 3 hrs Correct, we can't do logic/set operations yet, but we can handle some graph traversal questions where the answer is a property of an entity several hops away, like: (1) "Who is Arya Stark's father's wife?" or (2) "Mark Zuckerberg's wife's birthday"
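For a feel of what that traversal involves, here is a minimal sketch over a toy knowledge graph (illustrative data, not Cortex's implementation):

```ts
// Minimal sketch of multi-hop lookup over a toy knowledge graph.
// Entities and properties here are illustrative.
const kg: Record<string, Record<string, string>> = {
  "Arya Stark":   { father: "Eddard Stark" },
  "Eddard Stark": { wife: "Catelyn Stark" },
};

// Follow a chain of properties from a starting entity.
function traverse(entity: string, path: string[]): string | undefined {
  let current: string | undefined = entity;
  for (const prop of path) {
    if (current === undefined) return undefined;
    current = kg[current]?.[prop];
  }
  return current;
}

// "Who is Arya Stark's father's wife?" -> hop to father, then to wife.
console.log(traverse("Arya Stark", ["father", "wife"])); // "Catelyn Stark"
```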
ORioN63 2 hrs I also tried things like:
How old is the French Prime-Minister? How old is the Portuguese President?
President always defaults to Trump and Prime-Minister to May (May also returns two different results even though it shows the same text/source?). Also, in Sapien, "Prime-Minister" wasn't recognized.
I'm very excited about technologies like this one.
patelajay285 2 hrs Yes, this is definitely a class of questions we don't handle right now, but we have updates coming soon!
Good catch on "Prime-Minister", we will patch that.
patelajay285 3 hrs Right now we think of Cortex as a competitor to Google's Knowledge Graph and WolframAlpha, rather than a common sense knowledge graph. But we hope to answer questions like that one day :)
patelajay285 3 hrs Thanks for pointing this out - we didn't know about this case and it's cool to see Cortex can answer it correctly! For data sources right now, we use Wikipedia for Cortex but we're planning to add additional ones soon to handle more questions (e.g. questions around movies, restaurants, etc.).
There are definitely some questions (e.g. earth age) that we aren't as good at right now, but we're improving those!
stevenschmatz 3 hrs It's definitely impressive. However, it still fails at questions like "How old is the Earth".
patelajay285 3 hrs Thanks for checking it out! There are definitely question domains that need work, and some overfitting problems, but we wanted to get this out to the HN community early and see what they thought.
vijayr 3 hrs Pretty cool. It answered 'how old is the president' correctly, but got confused with 'how old is the vice president' and gave president Bush's age.
Fun to play with!
bpicolo 3 hrs
What is the most dangerous bear?
Winnie - The - Pooh
patelajay285 3 hrs It scared us as kids :)
jtraffic 3 hrs Something I'll keep my eye on, for sure. In the meantime:
It feels like you've reinvented much by writing stuff from scratch. spaCy is fast, has tons of features, is updated frequently, is free, and is trained on the Common Crawl corpus. Why not just use that? I'm only curious, not critical.
patelajay285 3 hrs Thanks!
Fair question, we think spaCy is great, but it just made a lot of sense for us to start with the basics so that we could modify things as needed. For example, our tokenization and syntax dependency tree algorithms treat "let it be" in "The band played let it be by the beatles." as a single chunk to return a more accurate syntax dependency tree, which Google Cloud NL and spaCy don't do out of the box today.
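As an illustration of why that chunking matters, a toy greedy longest-match pass over a list of known titles (not the production algorithm) already keeps "let it be" together:

```ts
// Toy longest-match chunker: merge known multi-word entities into
// single tokens before dependency parsing. The entity list is illustrative.
const entities = new Set(["let it be", "the beatles"]);
const MAX_SPAN = 4; // longest entity we try to match, in words

function chunk(sentence: string): string[] {
  const words = sentence.toLowerCase().replace(/[.]/g, "").split(/\s+/);
  const out: string[] = [];
  let i = 0;
  while (i < words.length) {
    let matched = false;
    // Try the longest span first so "let it be" beats plain "let".
    for (let len = Math.min(MAX_SPAN, words.length - i); len > 1; len--) {
      const span = words.slice(i, i + len).join(" ");
      if (entities.has(span)) {
        out.push(span);
        i += len;
        matched = true;
        break;
      }
    }
    if (!matched) out.push(words[i++]);
  }
  return out;
}

console.log(chunk("The band played let it be by the beatles."));
// [ "the", "band", "played", "let it be", "by", "the beatles" ]
```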
zitterbewegung 2 hrs This is really cool. Website design is killer and looks beautiful. I tried "Who married the 51st president?" which didn't work but when I tried "Who married Barack Obama?" it responded correctly.
I then tried "Who married the president?" and got the correct responses also.
The only thing I would change: at the bottom of the Plasticity demo you should have a big sign-up button, and a link to your documentation.
bobbylox 2 hrs Obama was the 44th (really 43rd if you count by people instead of presidencies) President.
zitterbewegung 1 hr Those queries don't work either.
patelajay285 1 hr You're right, it's on the roadmap for Cortex along with: 1) ordered queries ("Who is the 44th president?") 2) comparison queries ("Is Bill Gates older than Steve Ballmer?") 3) simple logic queries (AND/OR) 4) reducing overfitting (the system's tendency to respond with any answer even though it may not have an accurate one)
patelajay285 2 hrs Thanks! That's good feedback on the layout, we're changing it now!
zitterbewegung 1 hr Also, I would make the ability to do custom queries on the Cortex demo more prominent (maybe a custom query button?).
patelajay285 1 hr Makes sense, we haven't really optimized the ease-of-use of our demos / documentation yet, but are going to work on that soon.
gurut 1 hr What would a good non-commercial use case of this product be like? Would it help simplify/understand Terms & Conditions better? Text summarization?
patelajay285 1 hr Great question!
Text simplification and summarization are great places this technology can be deployed for non-commercial usage. One example is https://newsela.com which provides articles on many different subjects at various reading levels for kids in school. For example, you can adjust the reading level on an article like this:
https://newsela.com/read/lib-convo-europe-invasion-dna/id/33...
Currently, this process is manual. But, our APIs could be used to help automate things like this in the near future. Quick reminder that our APIs are free for open-source or educational purposes. So, if anyone's interested in giving this a go for a hackathon project, you can e-mail me at ajay@plasticity.ai
fiatjaf 3 hrs "We're make sense of dark data to help companies in technology, law, medicine, and government extract information from text."
Ignore the grammar error, you're helping government extract information from text? Where exactly? Do you mean the NSA? Do you mean helping the government look at public internet written commentary to track citizens?
patelajay285 3 hrs Thanks for catching that!
We don't do anything like that; in fact, we don't work with the government at all right now. We do know that there are huge applications of this technology in government beyond the Department of Defense: other agencies, like the Census Bureau and the IRS, have large corpora of text data they might need to process.
fiatjaf 1 hr That's evil. I'll hate you if you help the IRS.
ajeet_dhaliwal 3 hrs Can the lingua component of this (when it is available) be used to answer questions from my own text corpus?
acsands13 3 hrs Answering questions from your own text corpus will soon be part of Cortex! It's actually the next thing we are working on. If you'd like, we can let you know when it's ready. Just send me a message at alex@plasticityai.com with your email and we'll reach out.
joering2 3 hrs Cool. But I wonder what the use case for such a technology is. What kind of market do you target?
patelajay285 3 hrs A lot of the comments on this thread are about our Cortex Knowledge Graph API, but we actually think of the Sapien Language Engine API as our main product.
We think being able to understand the semantic meaning behind language, through our graph of the relationships and entities in a sentence, is going to be critical in building more robust conversational interfaces. So the companies we are talking to now include ones that want to use it for natural language search or messaging apps.
In this blog post, I would like to introduce the JavaScript Binary AST, an ongoing project that we hope will help make webpages load faster, along with a number of other benefits.
A little background
Over the years, JavaScript has grown from one of the slowest scripting languages available to a high-performance powerhouse, fast enough that it can run desktop, server, mobile and even embedded applications, whether through web browsers or other environments.
As the power of JavaScript has grown, so have the complexity and size of applications. Whereas, twenty years ago, few websites used more than a few KB of JavaScript, many websites and non-web applications now need to deliver and load several MB of JavaScript before the user can actually start using the site or app.
While “several MB of JavaScript” may sound like a lot, recall that a native application such as Steam weighs 3.1 MB (pure binary, without resources, debugging symbols or dynamic dependencies, measured on my Mac), Telegram weighs 11 MB and the Opera updater weighs 5.8 MB. I'm not counting web browsers themselves, because they are built largely out of dynamic dependencies, but I expect that both Firefox and Chromium weigh 100+ MB.
Of course, large JavaScript source code has several costs, including:
- heavy network transfers;
- slow startup.
We have reached a stage at which merely parsing the JavaScript source code of a large web application such as Facebook can easily take 500-800 ms on a fast computer, before the code can even be compiled to bytecode and/or interpreted. And there is very little reason to believe that JavaScript applications will get smaller over time.
So, a joint team from Mozilla and Facebook decided to get started working on a novel mechanism that we believe can dramatically improve the speed at which an application can start executing its JavaScript: the Binary AST.
Introducing the Binary AST
The idea of the JavaScript Binary AST is simple: instead of sending text source code, what could we improve by sending binary source code?
Let me clarify: the Binary AST source code is equivalent to the text source code. It is not a new programming language, nor a subset or superset of JavaScript; it is JavaScript. It is not a bytecode, but rather a binary representation of the source code. If you prefer, the Binary AST representation is a form of source compression designed specifically for JavaScript and optimized to improve parsing speed. We are also building a decoder that produces perfectly readable, well-formatted source code. For the moment, the format does not preserve comments, but there is a proposal to allow comments to be preserved.
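As a rough illustration (a simplified sketch, not the actual Binary AST encoding), the idea is to ship the tree the parser would have built rather than the characters it would have had to parse:

```ts
// Simplified illustration: the source text, and the tree it parses to.
// The real Binary AST defines a compact binary encoding of such trees;
// this JSON-ish shape is only for intuition.
const source = "function add(a, b) { return a + b; }";

const ast = {
  type: "FunctionDeclaration",
  name: "add",
  params: ["a", "b"],
  body: [
    {
      type: "ReturnStatement",
      argument: { type: "BinaryExpression", operator: "+", left: "a", right: "b" },
    },
  ],
};
// A Binary AST file would serialize a tree like `ast` directly, so the
// engine skips tokenizing `source` and re-deriving the tree from it.
```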
Producing a Binary AST file will require a build step and we hope that, in time, build tools such as WebPack or Babel will be able to produce Binary AST files, hence making switching to Binary AST as simple as passing a flag to the build chains already used by many JS developers.
I plan to detail the Binary AST, our benchmarks and our current status in future blog posts. For the moment, let me just mention that early experiments suggest that we can obtain both very good source compression and considerable parsing speedups.
We have been working on the Binary AST for a few months now, and the project was just accepted as a Stage 1 Proposal at ECMA TC39. This is encouraging, but it will take time until you see it implemented in all JavaScript VMs and toolchains.
Comparing with…
…compression formats
Most webservers already send JavaScript data using a compression format such as gzip or brotli. This considerably reduces the time spent waiting for the data.
What we’re doing here is a format specifically designed for JavaScript. Indeed, our early prototype uses gzip internally, among many other tricks, and has two main advantages:
- it is designed to make parsing much faster;
- according to early experiments, it beats gzip and brotli by a large margin.
Note that our main objective is to make parsing faster, so in the future, if we need to choose between file size and parsing speed, we are most likely to pick faster parsing. Also, the compression formats used internally may change.
…minifiers
The tool traditionally used by web developers to decrease the size of JS files is the minifier, such as UglifyJS or Google’s Closure Compiler.
Minifiers typically remove unused whitespace and comments, rewrite variable names to shorten them, and use a number of other transformations to make the program shorter.
While these tools are definitely useful, they have two main shortcomings:
- they do not attempt to make parsing faster (indeed, we have witnessed a number of cases in which minification accidentally makes parsing slower);
- they have the side-effect of making the JavaScript code much harder to read, by giving variables and functions unreadable names, using exotic features to pack variable declarations, etc.
By contrast, the Binary AST transformation:
- is designed to make parsing faster;
- maintains the source code in such a manner that it can be easily decoded and read, with all variable names intact, etc.
Of course, obfuscation and the Binary AST transformation can be combined for applications that do not wish to keep the source code readable.
…WebAssembly
Another exciting web technology designed to improve performance in certain cases is WebAssembly (or wasm). wasm is designed to let native applications be compiled into a format that can be transferred efficiently, parsed quickly and executed at native speed by the JavaScript VM.
By design, however, wasm is limited to native code, so it doesn’t work with JavaScript out of the box.
I am not aware of any project that compiles JavaScript to wasm. While this would certainly be feasible, it would be a rather risky undertaking: it would involve developing a compiler at least as complex as a new JavaScript VM, while making sure that it remains compatible with JavaScript (which is both a very tricky language and one whose specifications are clarified or extended at least once per year). And the effort would be wasted if the resulting code were slower than today's JavaScript VMs (which tend to be really, really fast), or so large that it made startup prohibitively slow (the very problem we are trying to solve here), or if it didn't work with existing JavaScript libraries or (for browser applications) the DOM.
Now, exploring this would definitely be interesting work, so if anybody wants to prove us wrong, by all means, please do it :)
…improving caching
When JavaScript code is downloaded by a browser, it is stored in the browser's cache to avoid re-downloading it later. Both Chromium and Firefox have recently been improved to cache not just the JavaScript source code but also the bytecode, thus neatly side-stepping the issue of parse time on the second load of a page. I don't know where Safari or Edge stand on this; they may have comparable technologies.
Congratulations to both teams; these technologies are great! Indeed, they nicely improve the performance of reloading a page. This works very well for pages whose JavaScript code has not been updated since they were last accessed.
The problem we are attempting to solve with the Binary AST is different: while we all have some pages that we visit and revisit often, there is a larger number of pages that we visit for the first time, in addition to the pages that we revisit but that have been updated since our latest visit. In particular, a growing number of applications are updated very, very often; for instance, Facebook ships new JavaScript code several times per day, and I would be surprised if Twitter, LinkedIn, Google Docs et al. didn't follow similar practices. Also, if you are shipping a JavaScript application, whether web or otherwise, you want the first contact between you and your users to be as smooth as possible, which means you want the first load (or first load since an update) to be very fast, too.
These are problems that we address with Binary AST.
…improving prefetching
Additional technologies have been discussed to let browsers prefetch and precompile JS code to bytecode.
These technologies are definitely worth investigating and would also help with some of the scenarios for which we are developing the Binary AST, each technology improving the other. In particular, the better resource-efficiency of the Binary AST would help limit the resource waste when such technologies are misused, while also improving the cases in which they cannot be used at all.
What if…
…we used an existing JS bytecode?
Most, if not all, JavaScript virtual machines already use an internal representation of code as JS bytecode. I seem to remember that at least Microsoft's virtual machine supports shipping JavaScript bytecode for privileged applications.
So, one could imagine browser vendors exposing their bytecode and letting all JS applications ship bytecode. This, however, sounds like a pretty bad idea, for several reasons.
The first one affects VM developers. Once you have exposed your internal representation of JavaScript, you are doomed to maintain it. As it turns out, JavaScript bytecode changes regularly, to adapt to new versions of the language or to new optimizations. Forcing a VM to keep compatibility with an old version of its bytecode forever would be a maintenance and/or performance disaster, so I doubt that any browser/VM vendor will want to commit to this, except perhaps in a very limited setting.
The second affects JS developers. Having several bytecodes would mean maintaining and shipping several binaries, possibly several dozen if you want to fine-tune optimizations for successive versions of each browser's bytecode. To make things worse, these bytecodes would have subtly different semantics, leading to the same JS source behaving differently depending on its compilation target. While this is in the realm of the possible (after all, mobile and native developers do it all the time), it would be a clear regression from the current JS landscape.
…we had a standard JS bytecode?
So what if the JavaScript VM vendors decided to come up with a novel bytecode format, possibly as an extension of WebAssembly, but designed specifically for JavaScript?
Just to be clear: I have heard people regretting that such a format did not exist but I am not aware of anybody actively working on this.
One of the reasons people have not done this yet is that designing and maintaining a bytecode for a language that changes all the time is quite complicated, doubly so for a language already as complex as JavaScript. More importantly, keeping interpreted JavaScript and bytecode JavaScript in sync would most likely be a losing battle, one that would eventually result in two subtly incompatible JavaScript languages, something that would deeply hurt the web.
Also, whether such a bytecode would actually help code size and performance remains to be demonstrated.
…we just made the parser faster?
Wouldn’t it be nice if we could just make the parser faster? Unfortunately, while JS parsers have improved considerably, we are long past the point of diminishing returns.
Let me quote a few steps that simply cannot be skipped or made infinitely efficient:
- dealing with exotic encodings, Unicode byte order marks and other niceties;
- finding out whether this / character is a division operator, the start of a comment, or a regular expression;
- finding out whether this ( character starts an expression, a list of arguments for a function call, a list of arguments for an arrow function, …;
- finding out where this string (respectively string template, array, function, …) stops, which depends on all the disambiguation issues above;
- finding out whether this let a declaration is valid or whether it collides with another let a, var a or const a declaration, which may actually appear later in the source code;
- upon encountering a use of eval, determining which of the 4 semantics of eval to use;
- determining how truly local local variables are;
- …
Ideally, VM developers would like to be able to parallelize parsing and/or delay it until we know for sure that the code being parsed is actually used, and indeed most recent VMs implement these strategies. Sadly, the numerous token ambiguities in the JavaScript syntax (two of which are sketched below) considerably limit the opportunities for concurrency, while the constraints on when syntax errors must be thrown considerably limit the opportunities for lazy parsing.
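To make two of these ambiguities concrete, here are tiny snippets (illustrative only) in which the same character means different things depending on context the parser must already have resolved:

```ts
// '/' can be a division operator or the start of a regex literal,
// depending on what came just before it:
const a = 8, b = 2, g = 2;
const x = a / b / g; // two divisions: (8 / 2) / 2 === 2
const y = /b/g;      // a regular expression literal with the 'g' flag

// '(' can open a parenthesized expression or an arrow-function
// parameter list; the parser only finds out when it reaches '=>':
const q = (a);                 // parenthesized expression: q === a
const r = (a: number) => a + 1; // the same '(' starts a parameter list
```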
In either case, the VM needs to perform an expensive pre-parse step that can often backfire into being slower than regular parsing, typically when applied to minified code.
Indeed, the Binary AST proposal was designed to overcome the performance limitations imposed by the syntax and semantics of text source JavaScript.
What now?
We are posting this blog entry early because we want you, web developers and tooling developers, to be in the loop as early as possible. So far, the feedback we have gathered from both groups is pretty good, and we are looking forward to working closely with both communities.
We have completed an early prototype for benchmarking purposes (so, not really usable) and are working on an advanced prototype, both for the tooling and for Firefox, but we are still a few months away from something useful.
I will try to post more details in a few weeks' time.
Next month, e-commerce will change forever, thanks to Amazon. September 12 marks 20 years since Amazon filed for its 1-Click patent, which means the patent will expire and the technology behind it will be free for any e-commerce site to use. Starting next month, more and more sites will offer a one-click checkout experience. Most major sites have already started development, with plans to launch soon after the patent expires.
History behind the patent
Amazon applied for the 1-Click patent in September of 1997; the actual patent was granted in 1999. The idea behind the patent is that when you store a user's credit card and address, only a single click is needed to order a product. For the last 20 years Amazon has kept a tight hold on this technology, licensing it to just one company: Apple. No one knows what Apple paid for the license, but sources have assessed the patent's value at 2.4 billion dollars. Over the last 20 years Amazon has defended the validity of the patent in several cases, even having to revise it at one point. But now the wait is almost over, and this technology is about to reach the open market.
Not a one page checkout
The one-click checkout is not to be confused with a one-page checkout. With a one-page checkout, all of the account, checkout, and payment information is on one page. With a one-click checkout, a user is sent straight from the product (or category) page to the order confirmation page. No clicking through steps or accepting charges: one click from a product page and the order is placed, with the user landing directly on the order confirmation page. One click and done.
Merchants listen up
If you are a merchant, this can be a huge opportunity. With the holiday season right around the corner, who does not want to offer their customers a quicker, easier way to check out? You can reduce the friction of the whole checkout process down to a single button press on a product page. Look at the image below: pressing the Buy now button takes the user directly to an order confirmation page and charges their payment method.
[Image: the thirty bees "Buy now" button]
Not all credit card processors have the technology to support a one-click checkout system. Some that we know have the technology are:
- Stripe
- Authorize.net
- First Data
- PayPal Pro
- Skybank
These are the ones we have worked with in the past that we know use a card vault. Others likely support it too, so if you know another processor that uses a card vault, let us know. The card vault is the key to the frictionless payment: customers store their card to use later, and that is one of the keys to the one-click checkout process.
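As a sketch of how a vaulted card reduces checkout to a single call (a hypothetical gateway interface, not any specific processor's API):

```ts
// Hypothetical payment-gateway interface; real processors (Stripe,
// Authorize.net, ...) each expose their own client libraries.
interface CardVaultGateway {
  // Charge a card previously stored in the vault, identified by token.
  chargeStoredCard(
    vaultToken: string,
    amountCents: number,
    currency: string
  ): Promise<string>; // resolves to a charge id
}

// One-click buy: the customer's card token and address are already on
// file, so a single button press can place the order.
async function oneClickBuy(
  gateway: CardVaultGateway,
  customerVaultToken: string,
  priceCents: number
): Promise<string> {
  const chargeId = await gateway.chargeStoredCard(customerVaultToken, priceCents, "USD");
  // ...create the order record here, then send the user straight to the
  // order-confirmation page.
  return chargeId;
}
```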
How serious is this?
It is serious enough that the World Wide Web Consortium (W3C) has started drafting a proposal for one-click buying methods. It has recruited some of the top companies in the industry, like Google, Apple, and Facebook, to help come up with a set of specifications. Google has already implemented some of the standards in its Chrome and Chrome Mobile browsers, with more likely to come. The proposal describes ways of storing card and address data in the browser and letting the browser communicate directly with your payment gateway to send the card or bank information. Sounds pretty useful, doesn't it?
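For a feel of the browser side, here is a minimal sketch using the Payment Request API that Chrome ships; the exact shape of the method data has shifted across spec drafts, so treat the details as illustrative:

```ts
// Minimal Payment Request sketch: the browser supplies the stored card
// and address, so the site never renders its own checkout form.
async function browserCheckout(): Promise<void> {
  const request = new PaymentRequest(
    [{ supportedMethods: "basic-card" }], // payment methods we accept
    { total: { label: "Order total", amount: { currency: "USD", value: "19.99" } } }
  );
  const response = await request.show(); // browser shows its own payment sheet
  // ...send response.details to the payment gateway here...
  await response.complete("success");    // close the sheet
}
```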
What are we doing?
We realize this technology is important to our merchants. This is something that will change e-commerce in a major way over the next year. We have already started on a framework to extend the thirty bees 1.0.x branch to allow single-click buying. We are developing a module that payment modules can hook into, so that developers can extend their payment modules to work with single-click buying. We are going to develop several of these modules in house, such as the Stripe module and a couple of others. We will also release a couple of tutorials on how to hook into the single-click checkout module, so that developers can easily update their modules to support the new system.
" I see the future as running a docker file on your laptop, running it through a ci system, then pushing it to a container service. Docker is the clear standard." Where I work this has been the standard for over a year now. reply
samstave 1 hour ago
Would you mind expanding on exactly what the CI pipeline/stack that your company made its standard looks like?
nawitus 41 minutes ago
Others have discussed how Docker uses a layered approach, and how two containers that share a base system will share most of the filesystem and memory. The real power of containers comes with orchestration.
By leveraging containers, container orchestration systems can provide high availability, scalability, and zero-downtime rollouts and rollbacks, among many other things; these were hard before containers and container orchestration. By allowing containers to be moved between nodes in a cluster, one generally achieves higher hardware utilization than with VMs alone (which are themselves a big improvement over software on bare-metal hardware). All of this also leads to easier and better continuous deployment, which in turn leads to easier testing and greatly simplifies provisioning of hardware for new projects.
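To illustrate the layering mentioned above, here is a minimal Dockerfile sketch (base image and file names are only examples): each instruction produces one layer, and every image built from the same base shares those layers.

```dockerfile
# Each instruction below produces one layer; images built FROM the same
# base share its layers on disk, which is what makes containers cheap.
FROM node:8-alpine
WORKDIR /app
# Copy the dependency manifest first: it changes rarely, so the
# expensive install layer below is usually served from cache.
COPY package.json .
RUN npm install
# Application code changes often, so only this layer is rebuilt.
COPY . .
CMD ["node", "server.js"]
```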
The Korean People's Army yesterday presented leader Kim Jong-un, as originally scheduled, with its plan for firing missiles into the waters off Guam. Kim expressed satisfaction with the plan but is holding his fire for now, continuing to watch America's attitude. Observers generally read this as a softening of Kim's stance, but analysts warn against over-optimism: next week's joint US-South Korean military exercises could reignite tensions at any moment, and North Korea is also reported to be moving missile launchers, putting it in a position to fire long-range ballistic missiles within 24 to 48 hours.
[Photo released by KCNA: Kim Jong-un (second from left) at the Korean People's Army Strategic Force headquarters, hearing the plan for missile strikes into the waters off Guam. Reuters] North Korea disclosed last week that it would send four Hwasong-12 long-range ballistic missiles over Japan toward the waters off Guam. State news agency KCNA reported yesterday that Kim, accompanied by Hwang Pyong-so, director of the army's General Political Bureau, and Kim Jong-sik, a deputy department director of the Workers' Party Central Committee, had visited Strategic Force headquarters the previous day to be briefed on the plan. It was his first public appearance in half a month, quashing speculation that he had gone into hiding to prepare for actual war. A screen at the headquarters showed satellite imagery of Andersen Air Force Base on Guam, and officers traced the missiles' flight paths on maps. Satisfied with the briefing, Kim said that before deciding whether to proceed he would "watch a little longer the foolish and stupid conduct of the Yankees (the Americans)", and that if the US "persists in its extremely dangerous, reckless actions on the Korean peninsula" he would "make a major decision as already announced... to defuse tension and prevent a dangerous military clash on the peninsula, it is necessary for the US to make the right choice first". Kim added that the present situation was not the moment to discuss when North Korea might release the three US citizens it holds.
[Guam: churches held Sunday Mass as usual, with large numbers of the faithful coming to pray. Reuters] Pentagon says North Korea is moving launchers
Holding fire offers a glimmer of de-escalation. Analysts credit senior US officials who spent the weekend patching up President Trump's incendiary "fire and fury" rhetoric, successfully dousing the flames and softening Kim's position. In particular, Secretary of State Tillerson and Defense Secretary Mattis took the rare step of publishing a joint opinion piece in The Wall Street Journal on Sunday, stressing that the US is not seeking regime change in Pyongyang and that there is room for talks if North Korea stops its provocations. John Delury, a North Korea expert at Yonsei University in Seoul, said: "The officials' message de-escalated the situation, and they deserve credit... when did we last see the Secretary of State and the Secretary of Defense publish a joint piece?" But Stephen Noerper, professor of political science at Columbia University, cautioned that "one should not be too optimistic; the peninsula has always been able to heat up extremely fast", especially with the annual US-South Korean "Ulchi Freedom Guardian" joint exercise starting next Monday, which "could sharply escalate the situation". CNN, citing Pentagon sources, reported that US reconnaissance satellites had spotted North Korea moving missile launchers. The sources would not say whether the launchers were loaded or whether the movements were tied to the Guam plan, but they show that North Korea could fire long-range ballistic missiles within 24 to 48 hours of an order from Kim. Indeed, after the briefing Kim instructed the military to remain on standby, ready to go into action at any time.
Moon Jae-in: no party may resort to force
Chinese foreign ministry spokeswoman Hua Chunying said the situation on the peninsula was nearing a crisis tipping point, which is also a turning point for deciding to return to talks, and again urged all sides to show restraint and resolve the issue peacefully. South Korean President Moon Jae-in, speaking yesterday at ceremonies marking the 72nd anniversary of liberation, stressed that the peninsula must never suffer war again and that no party may use force on the peninsula without South Korea's consent. Japanese Prime Minister Shinzo Abe held a telephone call with Trump in which the two agreed that the first task in handling the crisis is to do everything possible to stop North Korea from launching missiles. Yonhap / AFP / CNN / The Wall Street Journal
Breaking News 2017-08-16 00:01:39 HKT
Japanese food is a Hong Kong favourite. To spare locals any embarrassment when "going back to the old country", Apple Daily asked two Japanese natives who have lived in Hong Kong for ten years and often help Japanese tourism boards with promotion, Yusuke Otsuka and Chizuru Kimura, to unpick six myths about Japanese dining culture.
Reporter: 佘錦洪
"In a restaurant, you would of course normally eat sushi with chopsticks." The soy sauce should be dabbed on the fish, not the rice, "because the rice soaks up the soy sauce and turns the sushi salty, spoiling the natural flavour of the fish; grains of rice can also drop into the soy sauce dish, which is considered impolite."
Otsuka concedes that more and more Japanese now eat salmon: "Salmon does have many virtues, such as plenty of protein and DHA, but in traditional Japanese food culture we did not eat salmon." Pressed to choose, his answer is disarmingly direct: "I would still pick tuna first, because salmon tastes rather oily."
Must you slurp loudly when eating ramen or udon? Many people believe the Japanese slurp their ramen as an unconventional tribute to the chef's skill. Otsuka admits this is just a "beautiful misunderstanding": "Japanese noodles come in hot soup, and drawing in air as you eat stops your mouth getting scalded while pulling in more of the aroma and lifting the flavour. With other foods, like spaghetti or Chinese noodles, we of course try to eat as quietly as we can."
Is draining the ramen broth the mark of a true connoisseur? Otsuka does not entirely dismiss the idea: "Some traditional ramen shops make their broth with seafood and may spend ten hours a day preparing it, so the chef might ask customers to finish it. Whether the custom applies depends on the individual shop."
Kimura says it also comes down to personal habit: "If you want to drink the soup, that's fine, but some ramen is very intense, salty and oily, and drinking the broth is not always healthy."
When dining with Japanese people, lift your bowl, but never shovel in the rice? For Chinese and Japanese alike, raising the rice bowl is a courtesy. Otsuka explains: "In the old days bowls were precious things, held in the hands as a mark of respect; but you must not bring the bowl right up to your mouth, which we think of as the way a dog eats. Japanese rice is stickier than the drier Chinese rice and easy to pick up with chopsticks, so you can eat it one mouthful at a time. In Hong Kong I still eat the Japanese way, though of course it is harder (to pick up the rice)."
Must chopsticks be laid horizontally? Otsuka cautions against sticking chopsticks upright into food, reaching for the same dish at the same moment as someone else, or resting chopsticks across a bowl or plate. It all comes back to their shape: "Traditional Japanese chopsticks are quite pointed, and aiming the tips at another person is seen as aggressive and rude, so the correct placement is horizontal, in front of you."
A chopstick rest is a fixture in every household. If a restaurant provides none, Kimura says she folds the paper chopstick sleeve into an improvised rest on the spot: "Simple, and it keeps me mindful."
Otsuka notes that Japanese people learn table manners from early childhood: kindergarteners first learn to handle chopsticks, and from primary school boys and girls alike take six years of home economics, learning to cook traditional Japanese dishes and the etiquette that goes with them. Kimura believes that although Japanese dining culture has many rules, they all spring from not imposing on others, so that everyone can share the happiness food brings.
Siu Wing-chi, public relations manager of the izakaya 浪花一本居食屋, says Japanese people adapt to Hong Kong's dining culture once here, becoming "less strict about etiquette, easygoing and less demanding", though some have made a point of asking for iced water, "because Japanese people don't much care for room-temperature or hot water; to them, iced water is what you serve an honoured guest". To present the most authentic flavours, she says, the restaurant imports its main ingredients from Japan, but adjusts to local tastes: the curry, traditionally on the sweet side, is made richer and more assertively spicy, and the wasabi for the sushi is served separately.
Venue courtesy of 浪花一本居食屋
Yusuke Otsuka (left) and Chizuru Kimura (right), Hong Kong residents of ten years, both note that while Japan has many rules, they all spring from not imposing on others, so that everyone can share the happiness food brings. Photo: 夏家朗 (Apple Daily)
Rice cooked from Japanese rice is stickier and easier to pick up with chopsticks. Photo: 夏家朗 (Apple Daily)
When dining with Japanese people, raising the rice bowl is part of the etiquette; eat the rice one mouthful at a time. Photo: 夏家朗 (Apple Daily)
The correct way to eat sushi is to dab the soy sauce on the fish. Photo: 夏家朗 (Apple Daily)
Otsuka says the correct placement for chopsticks is horizontal, in front of you. Photo: 夏家朗 (Apple Daily)
Siu says the restaurant adjusts its dishes to Hongkongers' tastes. Photo: 夏家朗 (Apple Daily)
[Image: customers look at beef steaks for sale at a Sam's Club Walmart store in Beijing, China, June 29, 2017. What's the story? (Reuters/Jason Lee)]
Behind every food item being sold, there’s a story to tell. In China, where food scares are common, many consumers are particularly anxious to hear it.
With that in mind, a Chinese e-commerce company has made it possible for customers to look at a detailed history of their steaks—from when the cow was born to what it was eating—before it’s served on their dinner tables. The information is being made available with the help of blockchain, a technology known for being hard to tamper with.
JD.com, China’s second-largest e-commerce platform, has been working with Kerchin, an Inner Mongolia-based beef manufacturer, since early May (link in Chinese) to use blockchain to track the production and delivery of frozen beef. People living in Beijing, Shanghai, and Guangzhou—China’s most populous cities—can now track the journey of beef ordered from JD.
Food fraud costs the global food industry some $40 billion each year, according to a 2016 report by PwC, but Chinese consumers are particularly fearful about food safety. Their confidence in domestic food products plummeted after tainted milk powder killed six infants in 2008. Today, "exposés" of fake food, not all of which are true, can spread like wildfire on Chinese social media platforms like WeChat and Weibo, only adding to the confusion and distrust. When problems do arise, the lack of transparency about how food is processed makes it challenging to pinpoint where in the supply chain things went wrong. Instead of being centralized, information is often held by manufacturers, warehouses, and delivery companies separately.
The life and death of a cow
Blockchain is most frequently associated with bitcoin. To solve one of the fundamental problems of the cryptocurrency—and keep people from “double spending” their digital money—blockchain is used to publicly record every bitcoin transaction. The technology, born in 2008, creates secure copies of a ledger and provides a mechanism for various parties to check and agree on a set of facts, which, after being recorded, can’t be changed. The secure nature of the technology has led to new uses, such as tracking land ownership or tracing the origin of a steak. (The latter use is also being tested, as of early this month, by Golden Gate Meat Company in San Francisco.)
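A minimal sketch of that tamper-evidence idea (illustrative, not JD's or Hyperledger's actual implementation): each block commits to the hash of its predecessor, so silently editing an old record breaks every hash after it.

```ts
import { createHash } from "crypto";

// Each block commits to the previous block's hash, so editing an old
// record invalidates every later hash in the chain.
interface Block { data: string; prevHash: string; hash: string; }

function makeBlock(data: string, prevHash: string): Block {
  const hash = createHash("sha256").update(prevHash + data).digest("hex");
  return { data, prevHash, hash };
}

const genesis = makeBlock("cow 1556 born on farm", "0".repeat(64));
const next = makeBlock("cow 1556 slaughtered July 2", genesis.hash);

// Verification: recompute each hash and check the links between blocks.
function verify(chain: Block[]): boolean {
  return chain.every((b, i) =>
    b.hash === createHash("sha256").update(b.prevHash + b.data).digest("hex") &&
    (i === 0 || b.prevHash === chain[i - 1].hash));
}

console.log(verify([genesis, next])); // true; any edit flips this to false
```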
“The information cannot be falsified,” says Josh Gartner, JD’s spokesman. In their partnership, JD will be responsible for the logistics of getting the meat to customers, while Kerchin (which had about $300 million in revenue last year, 10% of it from online sales of beef products) will “ensure the authenticity of all product information,” he adds.
To design and develop its blockchain, JD adapted the architecture from Hyperledger, an open-source project that lets enterprise developers use blockchain technology in various industries.
The process of encoding data to the blockchain begins with Kerchin scanning barcodes to collect and store data in its own supply chain before providing it to JD, which writes the information to the blockchain. After that, any change requires a digital signature, and both parties are immediately informed of any modification.
To understand the process, I ordered an eye-round steak, a cut from above the cow’s rear-leg region, from JD on the afternoon of July 14 in Guangzhou. Weighing 200 grams (0.44 lbs), the meat arrived the next day, delivered by a JD courier. It came encased in a black box: the front had an opening showing the cut of beef, while the back had a QR code and instructions for pulling up information about my food.
[Image: the eye-round steak from Kerchin, ordered from JD, arrived on July 15 in Guangzhou. (Quartz)] Scanning the code using JD’s app loaded a webpage in the app’s browser titled “The wonderful journey of the beef.” Underneath those words was an image of a cow sitting in a meadow. The next page showed the cow’s serial number and a 64-digit alphanumeric code that refers to the sales transaction.
There was plenty to explore. I learned that my cow was three years old, weighed 605 kilograms (1,338 lbs), and was tended to by a local vet named Na Qin before being slaughtered on July 2. A Simmental breed, the cow lived on a farm Kerchin identified as “1556,” and was fed a diet of corn, wheat, and straw. (The number lets Kerchin track down a farm’s location.)
After the cow was slaughtered, its meat was then subjected to a number of tests to detect bacteria, water content, and animal-growth promoters. At this point, my steak was declared free of ractopamine, a drug banned in China that’s used to bulk up animals (paywall) weeks before they are slaughtered.
1."The wonderful journey of a cow." 2.The product's serial and blockchain number.3.The section of the cow, the cow's manufacture and package date. 4.The farm that raised the cow was indexed as "1556" which identifies the farm's geolocation. 1. “The wonderful journey of the beef.” 2. The steak’s serial number and 64-digit alphanumeric code referring to the transaction. 3. The farm that raised the cow was indexed as 1556. 4. The section of the cow, where it was processed, and its package date. (Quartz) 5.The Simmental cow was three years old. It was fed with corn, wheat, and straw. The cow had a vet named Na Qin. 6.It was slaughtered on July 2 and this product I brought from it were packed on July 5 in Liaotong in Inner Mongolia. 7.The product went through a series of tests to show whether it was contaminated with certain bacteria, and if the meat had good water content and a qualified outlook. 8.Finally, it was stored in the storage house on July 11. 5. The Simmental cow was three years old and fed a diet of corn, wheat, and straw. Na Qin was the vet that tended to it. 6. It was slaughtered on July 2, and the steak was packaged on July 5 in Tongliao city in Inner Mongolia. 7. It was put in the storage house on July 11. 8. The steak went through a series of tests to check whether it was contaminated with certain bacteria, and if the meat had good water content. (Quartz) While blockchain is hard to tamper with, tracing the origin from calf to chuck steak isn’t foolproof. John Spink, who studies food fraud at Michigan State University, tells Quartz that “fraudsters are very creative and constantly change their methods.” In a 2013 blog post, he wrote that “traceability is not a single magic-bullet to stop fraud, but it is a critical part” to reduce food fraud, adding that bad actors can also operate from within. “In some cases [the] criminals are hiding within the legitimate supply chain so [they] could defeat even a very technologically advanced countermeasure,” Spink says.
JD also admits that there might be lapses in data that’s tracked, depending on “when in the process each party begins to input and share that data,” says Gartner. Because of this, the company will periodically perform spot checks (link in Chinese) at Kerchin’s factories to examine how information is recorded and verify the validity of the data.
In blockchain we trust
Chinese agencies have experimented with other traceability methods in the ongoing fight to restore consumer faith in the food system. The National Platform for Tracking Food Safety, backed by China’s top planning body, had made more than 72.6 million items (link in Chinese) available for supply-chain tracing using barcodes as of Aug. 9. According to a 2015 paper in the peer-reviewed journal BioScience Trends, implementing food traceability requires “a substantial amount of valid information.” However, China’s supply chain often involves small factories, which generally lack dedicated platforms for exchanging logistics information.
A number of companies besides JD believe blockchain can solve this problem.
Alibaba, China’s largest e-commerce player, announced in March a plan to use blockchain to track beef from Australia, one of China’s key sources of beef, by working with three local companies, including accounting firm PwC Australia.
In October, Walmart also introduced blockchain into its Food Safety Collaboration Center in Beijing. Working with IBM, the retailer recently completed two pilot tests to help move pork from Chinese farms to its stores. The technology has helped reduce the paperwork required to process the containers. Those sorts of transportation documents, such as the bill of lading, can easily be tampered with or copied, making the supply chain vulnerable to criminals who can replace goods with counterfeit products, a type of maritime fraud (paywall) that costs billions of dollars each year.
Starting this September, China will work with the European Union on a project called EU-China-Safe, which will use blockchain and other technologies to regulate food safety. Belfast-based startup Arc-Net, one of the project’s partners, has developed a blockchain platform to support the identification of animal protein from birth, says CEO Kieran Kelly.
It’s possible that scaling up blockchain could allow consumers to track all products back to their source, but that vision is still far away. In the case of Kerchin’s beef, only two parties are involved in gathering and uploading information, but the global supply chain poses more challenges. Cargo often passes rapidly through multiple hands, and parties in supply chains don’t often share data. As the supply chain gets longer, an enormous amount of collaboration and transparency is needed. And of course, not all companies are eager to share their data and business practices.
Breaking News 2017-08-15 00:01:56 HKT
A children's wonderland
Singer Chan Hong-kin grew up in Kwun Tong, and in his memory the old McDonald's is a place steeped in warmth. "When I was little my grandma minded me, and going down first thing in the morning for breakfast was pure joy, I've no idea why! McDonald's was simply happier and tastier." The old branch had its apple tree and its low chairs for toddlers. "Under the apple tree was where the birthday parties were held. I wanted one so badly as a kid; the adverts kept 'poisoning' you: if only I could have my birthday party here!" The jingle ran: forever longing for your birthday, happiest when you're the star? Chan says his family was not well off and he never did star in a birthday party there, but he went on haunting the old branch into secondary school: Kwun Tong Plaza and Yue Man Square were packed with games arcades, and after "copying discs" the boys would naturally drift to the old McDonald's to chat, never short of food however long they talked, because friends working part-time there kept them supplied: "Somehow $20 bought $40 of food, a whole tray of fries. (The old branch) was basically a children's wonderland."
Chan rarely visits McDonald's now that he is older, but the old branch's rich ecosystem still fascinates him; all sorts of people and happenings converged there. "So many people doing homework, revising, dating. Courting in McDonald's strikes me as quite romantic. Back then you needed nothing; just having a place to hang out made you happy." The memories of young love and of growing up make him reluctant to see it go: "First it's the memories, second the convenience. One fewer place to sit, chat and eat; from now on you'll have to go much further." Even if another branch opens, he believes it will not be the same: "The feel won't be there."
The old branch kept its warmth; the new ones chase change. Chan thinks McDonald's has had to move its marketing with the times and cannot return to the days of selling warmth: "Hong Kong seems to have lost the laughter of those days. Too naive; people no longer believe in it. The world has grown complicated, and technology has changed society and even the family. Who now thinks of McDonald's as heart-warming? It's just somewhere to chat, and anyone with a bit of money would rather sit in a nicer cafe."
A community centre
Yuen Chi-yan, founder of the concern group Kwun Tong Living, lived briefly in Kwun Tong as a child; by the time he returned it was 2003 and he was at university. His first girlfriend lived in Kwun Tong, so he passed the old McDonald's almost every day. "If Tsim Sha Tsui's meeting spot is the Five Flagpoles, then Kwun Tong's has to be the old McDonald's, not the new one (on Hong Ning Road). It's the landmark for Kwun Tong folk, for everyone."
The landmark's proudest claim: on 18 October 1981 it broke the record for the most customers served in a single day at any McDonald's worldwide. Illustrious, yet never aloof: with friendly prices and plenty of seats, the old branch was a vital hub for Kwun Tong people. After founding the Kwun Tong Living page, Yuen recalls, he would routinely meet volunteers there: "Penniless NGOs like ours always meet at McDonald's. McDonald's is our mobile reception room; we see reporters and residents there, always." Boys and girls chatted, fought video-game duels and held tutoring sessions; old men spread their chessboards for matches. "In a sense it was the public space of Kwun Tong residents, an informal community centre."
Rewind to the Kwun Tong of the 1980s and 90s. Compared with centres like Tsim Sha Tsui and Mong Kok, Yuen says, it was an outlying district, but McDonald's stood in some measure for modernity, civilisation and Western culture; in those days pupils had to ace their dictation before they were treated to McDonald's. "For many public-estate kids it was a symbol of pure happiness; it meant you were close to the world. It was our window on the world." Yue Man Square was then in full bloom: a CD shop near Yan Oi Wai was said to draw even the singer Anthony Wong Yiu-ming hunting for finds; the streets had hawker stalls and dai pai dongs serving pig intestines, seafood and every working-class delicacy; there were two cinemas; and with McDonald's moving in, the district brimmed with life, "self-sufficient, like the Kwun Tong people's own Tsim Sha Tsui".
But redevelopment has battered Kwun Tong's vitality, and its landmarks have been wiped off the map one by one. Yuen is saddened by the closure: "When even the last landmark is gone, this place becomes zero. 'Kwun Tong' will just be the name of an MTR station with no history left inside. If the next generation asks where we used to wait for each other, and not even these traces remain, the sense of loss is immense." The surviving old branch, he says, is more than collective memory: open 24 hours, it lights a lamp in the darkness of Yue Man Square. "When there's nothing to eat here, no supplies, the whole place turns into a dead town, and a pitch-dark place means trouble with public order. The closure isn't only about memories; it raises the question of security arrangements during the redevelopment transition. What will they be? Residents need public space. Can the government plan this better?"
The old McDonald's closes on 30 August, bidding Kwun Tong farewell. Photo: 陳善南 (Apple Daily)
Singer Chan Hong-kin, who grew up in Kwun Tong, says frankly that nothing will be the same after the closure: "the feel is all gone". Photo: 張柏基 (Apple Daily)
Yuen Chi-yan's first girlfriend lived in Kwun Tong, which is how he and the old McDonald's became entwined. Photo: 陳善南 (Apple Daily)
Even the chessboard battle lines have moved from the park to the old McDonald's. Photo: 陳善南 (Apple Daily)
The old branch is vast, and its huge windows are the regulars' favourite spot. Photo: 夏家朗 (Apple Daily)
As night falls, "McRefugees" settle into every corner of the old branch for the night. Photo: 夏家朗 (Apple Daily)
A pane of glass separates the late-night diners from the McRefugees, like two different worlds. Photo: 夏家朗 (Apple Daily)
The old McDonald's neon lights a lamp for the wanderer; once it closes, who will show the way through the dark of Yue Man Square? Photo: 夏家朗 (Apple Daily)
At one in the morning, look around the old McDonald's: "McRefugees" huddle asleep in its corners and along its benches, while on the other side young people gather over late-night snacks, their laughter and the high-decibel background music all that fill the cavernous room.
Ben, sitting in a corner, says the music is always especially loud after midnight, on top of the young people's rowdiness: "So noisy. (The background music) is deliberately turned up after 12." Earplugs are his only path to peace. He has seen sleepers, kept awake by the din, yell at the youngsters: "'Shut up!' (Does it work?) Quiet for a bit, then noisy again, haha."
Ben lives in a rooftop hut on an old building, but living alone is too dreary. A cleaner by trade, he comes to the old McDonald's after every shift for the air-conditioning and the lack of mosquitoes, and to see the old friends who hang about there: "We talk horses, football betting, dirty jokes. It's nearly all men here." Are there unwritten rules for sleeping over? "None. Come early, claim your spot early. Just don't sleep too early, or the manager wakes you."
Sometimes he looks in on other branches. He recalls that after a McRefugee died at the Ping Shek Estate McDonald's the year before last, the spot was sealed off at night so no one could sleep there: "I only went over for a look out of curiosity. I don't go to that one." He always ends up back at the old branch. Ben jokes that it is the "Hotel McDonald's": "Anyone can stay the night. Not bad at all: air-conditioning in summer, and above all no mosquitoes." By day he watches families hold birthday parties in the glass room, and once saw an eccentric old woman conning youngsters into buying her fries; by night he meets fellow travellers who pass on tips about casual work, and Uncle Wong, who is past seventy.
Uncle Wong hobbles around the restaurant on a home-made walking stick, shifting seats now and then in search of the most comfortable spot. He lives in Yuk Lin Terrace, but relations with the neighbours are poor, and his mid-floor flat, facing straight onto the MTR station, catches no breeze, so on stifling summer nights he comes to the old McDonald's: "Most nights I come. There's air-conditioning; my place has none, and for another thing the neighbours aren't great." With no wife, children or family, he simply comes to doze. He has tried the basement McDonald's nearby but finds it grubby, not as clean as the old branch. Coughing as he pulls out his meal receipts, he says: "Sore throat. I don't like talking. (Does anyone usually chat with you?) Social workers used to come; not any more." Then, pointing at the receipts, he tells the reporter what he had for each meal.
Asked about the closure, these lodgers agree it stirs little in them. Uncle Wong says: "Doesn't much matter; sitting in a cha chaan teng would be better." Ben says he will "follow the crowd", migrating with his familiar old friends to another branch. For people scraping by at the bottom of society, talk of collective memory seems a luxury: "Nothing to miss. We all knew it was coming down; they've talked redevelopment for over a decade. That's how things are. However well they rebuild it, it's nothing to do with you. Flats cost what they cost."