Adults learn language to fluency nearly as well as children: study
274 bluffroom 7 hrs 145
news.ycombinator.com/item?id=19886718
With a single angry roar from US President Trump, the Chinese and US stock markets finally got the long-awaited, proper correction. Some say the Hang Seng Index will fall to the 28000 level, but does that matter? Individual stocks can move very differently from the index, so what is the point of guessing where the index will go? Besides, to a pessimist everything looks bleak. Yet after so many financial storms, hasn't the stock market kept making new highs? The escalating trade war before us is just a ripple in the long river of history.
How is the mainland economy doing right now? Certainly not well. The demographic dividend is fading, the elderly population keeps growing, and natural population growth is slow. The trade war is hurting exports, the debt problem has not materially improved, policy has only been loosened slightly, and property prices (and the exchange rate) are still too high. All of these factors will keep weighing on the mainland economy's competitiveness. Pessimists have never lacked for reasons.
Watch monetary policy rather than the economy
But buying stocks has never meant buying the economy. When the economy is good, most companies make money; not necessarily big money, but at least enough to survive. Only when the economy turns down can you see the true calibre of a company's management. Only then do good companies get the chance to expand their market share, and good stocks the chance to go on sale at deep discounts.
The stock market is a judgment on corporate earnings and central bank monetary policy. Last year's plunge in Chinese and US stocks was not necessarily about the trade war; the bigger reason may be that the Chinese and US central banks were draining liquidity at the same time. Beijing changed course in the third quarter of last year, and the Hang Seng Index bottomed in October. The Federal Reserve turned at the end of the year, and US stocks bottomed at the same time. Does monetary policy look likely to tighten again this year?
The Federal Reserve is scaling back its quantitative tightening (QT) programme from May, and will probably not restart QT this year. The People's Bank of China may hold off on large-scale easing for now, but if the market needs it, further loosening can be expected. With both major central banks willing to backstop the market, what is there to worry about?
Sentiment in China and Hong Kong shifts every so often: very pessimistic last year, very optimistic this year. Right now it is somewhat pessimistic; by the second half of the year it may well turn optimistic again.
To make big money in stocks and property, one word: sit
On corporate earnings, US tech stocks' first-quarter results generally beat expectations, and mainland consumer stocks did well too, especially appliances, travel and healthcare. Don't be shaken by the market's sharp fall; a plunge in share prices is actually working in your favour. Looking back, think how good the returns were on property bought during SARS or stocks bought during the circuit-breaker crash. The worse things look, the better the investment opportunity often is. Investing is not about making money by trading in and out; most of the profit comes from sitting (readers who have made big money in property and stocks will understand). It is long-term investing, not endlessly guessing where the market's bottom is. It's time in the market, not market timing, that counts.
The pessimists will be right one day, because the market will always have its corrections. But investing is not a debating contest. Years of ever-rising asset prices have shown that only the optimists laugh last.
The author holds an SFC licence for Type 9 (asset management) regulated activity
hcl.hkej@gmail.com
(Editor's note: Hao Chenglin's book 《致富新世代2──科網君臨天下》 ("New Age of Wealth 2: Tech Reigns Supreme") is now on sale)
Orders welcome: print and e-book editions
596 bennylope 11 hrs 106
https://github.com/kkuchta/css-only-chat news.ycombinator.com/item?id=19852105
What do you get when you combine three of tech’s biggest buzzwords: AI, blockchain, and 5G? Perhaps ridiculously fast, amazingly abundant wireless data.
Jessica Rosenworcel, a commissioner at the US Federal Communications Commission, believes that artificial intelligence and blockchain technology will give the US an edge in next-generation wireless networking over its big technological rival, China.
Speaking at the Business of Blockchain, an event organized by the MIT Media Lab’s Digital Currency Initiative and MIT Technology Review, Rosenworcel said AI and blockchains would allow wireless devices to use different frequencies within what is known as the wireless spectrum more dynamically and flexibly, enabling billions of devices to connect to 5G networks at once. Machine learning will help wireless devices and networks share and negotiate over spectrum, Rosenworcel said, while distributed, cryptographically secured ledgers will help them keep track of who has access to what.
Currently, the wireless spectrum is divided up for different uses. This avoids interference but isn’t the most efficient use of the airwaves. The suite of technologies known as 5G allows devices to connect in a variety of ways, and over a range of the wireless spectrum. With speeds of up to 20 gigabits per second, as well as greatly reduced latency, 5G smartphones should be able to run high-quality virtual-reality applications or download movies in seconds. With greater network capacity, 5G should also let many more devices connect to the internet—everything from wearables to washing machines.
Rosenworcel said it will be imperative to devise better ways to allocate the spectrum. “If you think about the internet of things, with 50 billion devices, and wireless input for all of them—we should figure out a real-time market for the wireless spectrum,” she said.
The commissioner pointed to a competition being organized by the Defense Advanced Research Projects Agency (DARPA) to devise new ways of negotiating over spectrum using AI. She said the FCC had recently begun researching whether a blockchain could help too. “If you put this on a public blockchain, you would have this public record of demand and could design systems differently,” Rosenworcel said.
Just as the wireless data available to smartphones has spurred technological progress, 5G should underpin innovation across the tech industry. The White House seems increasingly concerned that the US might cede its position as a technology leader in 5G, with potentially dire consequences for its economy. This worry is behind the scrutiny of Huawei, one of China’s most prominent and powerful companies.
“I am concerned that we are not positioned to lead,” Rosenworcel said at the MIT event. But she added that AI and blockchains could be crucial to helping the US stay competitive with China in wireless technology. “I don’t think of it as the immediate future of wireless, but it might be the far future—5 to 10 years hence,” she said.
In the US, China, and elsewhere, interest is growing in using AI to help advance wireless technologies, but this hasn’t yet found its way into 5G networking products. The National Institute of Standards and Technology (NIST) is currently researching how machine learning could help carve up the wireless spectrum. “Many problems in wireless networks that require processing large amounts of data and making decisions quickly can benefit from AI,” says Michael Souryal, the NIST researcher who leads the agency’s work.
“One example that we have been studying is the use of AI for real-time signal detection and classification, which is important for spectrum sharing.” Muriel Médard, a professor of electrical engineering at MIT, says more is needed than just new ways to manage spectrum using AI or blockchains. Specifically, Médard says, “coding” schemes, which determine how packets of information get routed, are required. “The other work is fundamentally worthwhile, but it needs another technology, too,” she says.
Just as the whole world believed the China-US trade talks would reach a deal within the week, so that everyone could join in a chorus of "happiness all round" with stocks and property both taking off, the stock and property markets had already surged in advance to celebrate early on hopes of a deal, with the property market especially frenzied.
Take one new development in Tseung Kwan O: as of last Friday it had drawn more than 18,000 registrations of interest, an oversubscription of 35 times, breaking the developer's record for a new launch and making it the biggest ticket-puller among new projects this year. With each registration requiring a HK$100,000 cashier's order, this single development alone has frozen more than HK$1.8 billion of market funds.
Yet all it took was one post on his social network from the wild man Trump to send Hong Kong stocks tumbling from yesterday's open. The Hang Seng Index fell more than a thousand points at one stage before closing down 871 points, a drop of nearly 3%. The stock market has reacted first. As for the buyers who had already submitted cheques to scramble for flats, will the sudden change in market mood make them get cold feet and forfeit their deposits? Or will they keep believing that Hong Kong property reigns supreme and fears no storm?
Trump's timing is spot on
Will the "raise" Trump has made in this all-in round of the China-US trade-war poker game actually work? Can it force China into maximum concessions at the new round of trade talks originally scheduled for this week in Washington? We will know in another two or three days.
But Trump has certainly timed it to perfection. The Chinese delegation was originally due in Washington on Wednesday, and Trump chose this Friday to raise tariffs on US$200 billion of Chinese goods. The whole posture says that if China cannot satisfy the US demands in the talks by Thursday, action follows immediately on Friday. The moment Trump's post went out, the Chinese and Hong Kong stock markets collapsed like a routed army today, and even the national team may be unable to hold back the selling. Trump's move is like holding a knife to China's neck.
The trade war rages; Hong Kongers watch with popcorn
A Chinese foreign ministry spokesperson has confirmed that the delegation will still fly to Washington on Wednesday for the talks. Will China meekly fall into line? Or will it compromise in part and keep playing for time? We will have the answer in a few days. But if China gives in, it risks being savaged by different factions at home for signing a humiliating trade agreement; if it does not sign, then from Friday it must bear the cost of 25% tariffs on US$200 billion of Chinese exports, and may even face the short-term risk of 25% tariffs on the remaining US$325 billion of Chinese goods not yet hit with punitive duties. Whichever way China chooses, there are consequences to bear.
Foreign media yesterday cited sources saying that China has instructed the national team to be ready to enter the market at any time to stabilise stocks, and the People's Bank of China also made the rare move yesterday morning, before the mainland's financial markets opened, of announcing a cut in the reserve requirement ratio for small and medium-sized banks from May 15, to stabilise market expectations and confidence and ease the pressure on small and medium-sized enterprises. Even so, over the next two days the pressure on stocks and currencies will surely be among the bargaining chips held by each side.
Although Hong Kong cannot stay unscathed in a China-US trade war, plenty of Hong Kongers are watching with popcorn in hand, waiting to see how Trump spins China in circles. The phenomenon is as amusing as it is ironic: more than two decades after the handover, why has Hong Kong people's sense of nationhood drifted further and further away?
zoelamyy@gmail.com
Abigail Hess @ABIGAILJHESS
Why this 10-year-old coder was noticed by Google and Microsoft
Scroll through Samaira Mehta’s Instagram and you’ll see that she is a lot like other kids her age. She posts about having a lemonade stand, going swimming and doing the “In My Feelings” dance challenge.
But she also stands out from other 10-year-olds — Mehta is the CEO, founder and inventor of CoderBunnyz, a board game that teaches players as young as 4 basic coding concepts. Players draw and move their bunny piece along the board with the goal of eating carrots and hopping to their final destination.
“CoderBunnyz will basically teach you all the concepts you ever need in computer programming,” Mehta tells CNBC Make It. “There’s the very basic concepts like sequencing and conditionals to more advanced concepts like loops, functions, stack, queue, lists, parallelism, inheritance and many others.”
Mehta says she first conceptualized the board game when she was “about 6-and-a-half, maybe 7,” after her father, an engineer who serves as an official adviser for the company, started teaching her how to code. As she researched learning materials for first-time coders, Mehta noticed there was an opening in the market for a product that helped young people pick up programming.
She started by sketching how she wanted the game to be designed. Then, with the help of her family, she connected with graphic designers and game manufacturers in China and New Zealand. After exchanging dozens of emails, Mehta settled on a product that she says she’s very proud of.
“My family is very much involved in my business,” she says. Her mother oversees marketing and social media for CoderBunnyz and her little brother tests the games.
Since inventing CoderBunnyz, Mehta has invented a second game, CoderMindz, a coding-based board game that teaches basic artificial intelligence (AI) concepts using the Java programming language.
“I’m really passionate about coding,” says the budding entrepreneur. “I want the kids to be the same way, because coding is the future and coding is what the world will depend on in the next 10 to 15 years. So if kids learn to code now, [when] they grow up they can think of coding maybe as a career option.”
So far, Mehta says her company has generated about $200,000 in revenue since April 2018 and sold about 6,000 games. She says she is reinvesting that money in the company, saving for college and donating to charities that address homelessness in her community.
At first, Mehta sold the board games through her website and stored the games in her garage.
“We used to pack every order we got,” she says. “And when it started building up, and we started getting more orders, we were not able to fulfill that many, so we were seeing if we could get it on Amazon, and luckily within almost just the first year, we got it on Amazon.”
Today, Mehta has a team that helps package the games and Amazon helps fulfill shipping.
The business venture has taken Mehta to schools, libraries and companies like Facebook, Microsoft and Intel, where she has held workshops for employees and their kids about how to get young people involved with coding.
But it was her trip to Google, and the opportunity to meet Google’s chief cultural officer, Stacy Sullivan, that left the strongest impression on the 10-year-old. “She said if I grew up I could probably work at Google if I like. And then she also said ‘Oh but you’ll probably have your own company by then,’” says Mehta. “She inspired me to work harder and it was just a great moment in my life.”
Mehta has also gotten words of encouragement from former First Lady Michelle Obama, who wrote the girl a letter in 2016 offering her words of support. “It was really cool receiving a letter from her,” says Mehta. “She just told me to keep working harder and that I’m an inspiration to all.”
For now, Mehta is focused on making CoderBunnyz a success and doing well in school. She says her dream college is Stanford University, and that her dream job is being an entrepreneur.
“I would say I already have it now, because I am an entrepreneur,” she says. “But I want to expand on that and I want to become an entrepreneur that helps people and does good for the community.”
170 vector_spaces 14 hrs 108
https://engineering.mixpanel.com/2011/08/05/how-and-why-we-switched-from-erlang-to-python/ news.ycombinator.com/item?id=19772349
Organize a Study Group/Book Club/Online Group/Event: How to Do It
12 ingve 3 hrs 1
http://stephaniehurlburt.com/blog/2019/3/27/you-should-organize-a-study-groupbook-clubonline-groupevent-tips-on-how-to-do-it news.ycombinator.com/item?id=19655731
Not long after I left school, I missed certain parts of it. Not enough of it to want to go back-- every time I considered that, I remembered all I didn't want-- but enough to try to recreate the good parts for myself.
Some good parts:
Stimulating intellectual discussion
Learning from more experienced people
Concrete tasks to work on, structure
Feedback from others on those tasks
Meeting people with similar interests
Some bad parts:
Ugly power dynamics between professors/authority figures and students
High praise & value placed on grades & scores
Heavy workload that made it hard to maintain work/life balance, especially while working jobs too
High pressure associated with scarcity-- only so many could get into top schools, only so many get A's, etc
Timed & graded tests
Not enough space/time to truly play and explore and mess up and keep playing
Since then, I've organized many different types of groups and gotten quite good at community organizing. Some of the groups I've organized throughout the years: math study groups, game engine forum/chats, psychology book clubs, programming workshops, electronics workshops, and more!
I've written one post on how I organized the workshops back in 2016, you can find it here: http://stephaniehurlburt.com/blog/2016/11/1/guide-to-running-technology-workshops
Here, I'll discuss some new developments to my thinking along with other practices that are more applicable outside of workshops. Keep in mind this is not exhaustive at all-- please let me know if you'd like elaboration on anything or more content.
I hope this inspires you to organize some groups of your own and make the most of it.
How to gather people?
If you're a newbie, the best advice I can give is to structure the group in such a way that you're okay with only 2-3 people joining. Surely you can find 2-3 others who are interested? You can gather people from work, other social gatherings, social media, friends of friends who mentioned it to people they know, or even public sites like meetup.com.
Being a newbie at organizing is tricky, because you're going to have an easier time with people you know (they'll be more forgiving of your mistakes), but strangers allow you to help more people and make a truly diverse event. Either way, don't be afraid to start small. I talked a little more on this in the workshop blog post but I want to be sure to mention this point.
Code of Conduct
This was something I didn't always have when first organizing groups years ago, and I learned over time why it's necessary.
Some aspects of a good code of conduct:
Proper descriptions of inappropriate behavior. Trust me, it's a rare day that someone who acted badly thinks they're abusive, racist, sexist, or that they even did anything wrong. Even when it's obvious. Community organizing will surprise you like that. You need to find a way to describe inappropriate behavior that even someone who thinks they're in the right can agree with. For instance, someone is unlikely to admit they were sexist, but they could admit they gave unsolicited advice and didn't speak from their own experience using "I" statements. Not the best example, but hopefully you see what I'm getting at.
It's about more than just preventing disasters-- it's about defining the social context of the event. Good codes of conduct give people a sense of what to expect from interaction with that group, and set the tone for what kind of space it is. Asking someone on a date might feel borderline harassing at a professional gathering (depending on how senior the person is, gender ratio, etc), but is perfectly expected at a singles' mixer. Talking about a trauma you went through with strangers might feel toxic and creepily out of place at a game meetup, and welcome in a psychology book club. Context matters. Set the tone.
Ways to report violations, ideally ways that work even if an organizer with power and favor did the bad thing. This is a real tough one, and I imagine one that will constantly evolve. We can look to politics for examples. These days, I keep it simple with my small and short-lived groups and opt for a "benevolent dictator" model-- all violations reported to me, I can kick anyone out for any reason (helps if I messed up with point #1, which you'll inevitably do). However, this is not a great model for longer-running groups or bigger groups, as trust me, all organizers are flawed and it does create a bad power dynamic (exactly what I'd like to avoid).
Make reporting easy and obvious. One of my favorite creative examples is signs I saw in the women's bathrooms at one event with instructions on how to report sexual harassment, including words you could use to discreetly report it while the harasser was still bothering you (like ordering certain drinks). But you can have more straightforward ways; just consider how violations could happen and how you'll handle them.
Have mild consequences for mild violations or little corrections to members, not just "you're kicked out at any violation whatsoever."
This is an example of one I used at my latest small book club: https://pastebin.com/6kA93uPV
Here's an example of a larger one: https://www.contributor-covenant.org/
Great post with lots more detail (written by someone with much, much more experience than me!) on codes of conduct: https://www.ashedryden.com/blog/codes-of-conduct-101-faq
Ways to Interact
Digital Chats
Keep in mind, people don't like having an extra tab or app open. Unless your group really provides something special, and even if it does, it'll often silently die if you require this. Lately I've been liking Twitter DM chats for small, short-lived groups. I've also had a lot of success with forums like Google Groups that send members e-mail updates, because everyone checks their e-mail, and success with Slack if the group had enough interest/momentum. The Slack ones have died fast when momentum dropped, though-- that requires consistent activity and interest.
In Person Gatherings
Talked about a bit in the workshop blog post (linked to in the Introduction). I've used mostly the same ideas, and since then also realized how many community spaces Seattle has-- for instance, an extra room in my local coffee shop they reserve for free, or a long table in another that I can also be sure no one sits at during the meetup time. Check out your local coffee shops, bookstores, library, any low-cost space (for instance, I wouldn't recommend a restaurant unless you could cover the bill yourself) for community spaces. Tech companies or coworking spaces like WeWork also host events for free, often just requesting they get to pitch your attendees on whatever they're offering (hiring positions, getting memberships, etc).
Location is a socioeconomic issue and matters. Whenever possible, inconvenience the people with money and make it convenient for those in poorer areas.
Video Calls
I usually send out a link that anyone can click on to join, typically using Hangouts. However, people who are more professional than me tend to prefer Zoom and I'd recommend it from my experience. Also check out the software of choice of digital conferences-- things get trickier the bigger your audience. I've seen one organizer do a smaller Hangouts group that was streamed with live chat, but it's nice when audience members can participate more than that.
Structure
Possibly the most interesting topic!
Book Clubs
I've found 100 pages every two weeks is a manageable amount and it seems the internet agrees with me. This is about 2 hours of focused reading, which you would think could be done in a shorter time, but keep in mind people (including me) often need to get in the right headspace and feel at peace enough to read the book, which may only happen on one lucky Saturday morning in those two weeks.
I set guidelines upfront that the topics should stay around the book-- for instance, quoting passages, asking questions about a concept the book introduces, mentioning related resources, or sharing experiences related to the book. Fiction will be different than non-fiction, I've mostly done nonfiction psychology texts.
Set an example. If you're contributing thoughtfully in the way you want others to, other thoughtful contributions that mirror yours will fall in.
I like a mix of chat and video calls for this format, so remote people can join.
A note on moderating discussions:
Since this is heavily discussion-based, I keep in mind lessons I've learned in other areas of life. From seminars in college, I learned-- 1/3 of your speaking should be questions, 1/3 should be rephrasing/underlining what others have said, and 1/3 should be your own contributions. From group therapy sessions, I learned taking breaks in discussion and allowing silence is so important for peace and inclusion. I also learned the power of "I" statements, not giving unsolicited advice to others, and avoiding sensitive topics. I love moderating discussions and examining what makes a healthy discussion, and I think I'm going to incorporate more discussions in different kinds of groups.
Study Groups
I've experienced both great failures and great successes with study groups, mostly focused on mathematics.
The bad ones were ones where it started to resemble the bad parts of school-- too much work and structure, and elitism/talking down to newbies. It's surprisingly hard to squash the latter as an organizer, you want to prevent those issues before they come up.
I came to learn that it's important to:
Let people still participate even if they fall behind, and if you do keep a schedule (see the book club guidelines above), make it easy to keep up with even if life is busy
Be careful about requiring previous knowledge, even on advanced texts. I'd caution against it. Let people self-select out or in.
On that note, schedules aren't necessary at all. I've run study groups that are pretty broad (say, mathematics themed) and just brought a bunch of library books for all to grab
Bring supplies and copies of the books, keeping in mind not everyone has much money. In the past I even brought gift cards I'd discreetly distribute before the study group started to those who needed them, so they could buy their own books for next time
Keep watch on members tutoring each other. Unsolicited, and sometimes even solicited, tutoring can very quickly turn condescending and overbearing during a time that's supposed to be a pleasant hobby activity. Setting the appropriate tone is the way to go to prevent this.
Let people talk casually about their lives and be generally social (unlike the book club), but make sure there's still an area that's peaceful/relatively quiet or an expectation that it's okay not to join in on the discussion (I often explicitly said this). Moderate what the social expectations are, what the context is
I tend to like in person gatherings at coffee shops (no reservations made, just showed up) for this format.
Workshops
See workshop post.
General networking groups
Every time I've organized something like this it's never had much structure, though that's normal for this type of group-- people often make meetups or conferences with talks that transition to general networking.
I typically do a slack group or e-mail forum with any theme apparent in the title and code of conduct very visible-- especially important whenever the group is mostly unstructured/unthemed discussion and interaction. One thing I've found that's important is that I actually have the bandwidth/energy to check in regularly to moderate. As a result, many of these types of groups have died over the years (though some have also died because of implicit or explicit code of conduct violations from members that were too bad for the group to come back from). Nonetheless, I keep coming back to this format because I find it very rewarding when done well.
I tend not to like in person for this format because I am not a fan of an audience being talked to. I'd prefer something more collaborative that doesn't imply weird power dynamics about who is special enough to get to be the speaker and who has to just sit mute.
Small groups grabbing meals are very common and tend to work well. You've got to keep an eye on making sure the attendees are from diverse backgrounds and some are out of your network. The problem with these small groups, of course, is a lot more people would love to join.
Another kind is general networking surrounding something more passive, like demo booths set up. I actually really like this concept as a way to allow more people to come to the event while not elevating anyone to be a speaker or panelist. I think in the future I may experiment with larger moderated discussions too.
PUBLISHED MON, APR 8 2019 2:12 PM EDT • UPDATED MON, APR 8 2019 3:35 PM EDT • Kate Rooney @KR00NEY
KEY POINTS
The modern version of capitalism isn’t working, according to some of the country’s richest people.
Warren Buffett, Jamie Dimon, Ray Dalio, Howard Schultz and other business leaders are calling for fixes to widening income inequality and under-investment in public education.
Democratic presidential hopefuls Elizabeth Warren and Bernie Sanders are campaigning on higher taxes on the wealthy, and some left-leaning policies are sparking a renewed debate over socialism vs. capitalism.
Billionaires are hardly looking to pivot to socialism. Dimon warns socialism would be “a disaster,” while Dalio underlines that capitalism shouldn’t be destroyed, it just needs to present an equal opportunity.
American billionaires are calling for changes to the system that enabled them to get rich.
Warren Buffett, Jamie Dimon, Ray Dalio, Bill Gates and a list of others say that capitalism in its current form simply doesn’t work for the rest of the United States. Some of their remedies involve higher taxes.
Hedge fund titan Ray Dalio is the most recent to criticize the current economic system. On Monday, the Bridgewater founder told CNBC that while it doesn’t need to be destroyed, capitalism does need to present an equal opportunity, which Dalio said he received through public education.
“I’m capitalist, I’m a professional capitalist. The system has worked for me,” Dalio said during a “Squawk Box ” interview on Monday. “I didn’t have anything and then I got something through the capitalist system.”
The issue chafing billionaires and politicians alike is a growing income gap.
The inequality between rich and poor Americans is as high as it was in the late 1930s, Dalio pointed out in a paper posted online last week. The wealth of the top 1 percent of the population is now greater than that of the bottom 90 percent of the population combined. Dalio called growing inequality and the lack of investment in public education “an existential risk for the U.S.” He and his wife announced a $100 million donation to the state of Connecticut for public education this week.
Among the fixes, Dalio floated raising “more from the top via taxes that would be engineered to not have disruptive effects on productivity.” He also advocated for public-private partnerships, and in a CBS “60 Minutes” interview that aired Sunday, supported raising taxes on the wealthy. But the important thing “is to take those tax dollars and make them productive,” Dalio told CBS.
Dimon, Buffett, Gates
Jamie Dimon is also frustrated with the income gap. In a letter to shareholders last week, the J.P. Morgan Chase CEO outlined a list of problems plaguing the majority of Americans. Among the remedies could be higher taxes on the 1 percent, he said.
“If that happens, the wealthy should remember that if we improve our society and our economy, then they, in effect, are among the main winners,” Dimon said.
Berkshire Hathaway CEO Warren Buffett — third on Forbes’ 2019 billionaires list — has repeatedly said the wealthy should be taxed more. In 2006, the CEO committed to give all of his Berkshire Hathaway stock to philanthropic foundations. He and Bill and Melinda Gates have asked hundreds of wealthy Americans to pledge at least 50 percent of their wealth to charity in the so-called Giving Pledge. There are now 190 people signed on, including Facebook CEO Mark Zuckerberg and Netflix CEO Reed Hastings.
In a 2011 New York Times op-ed titled “Stop Coddling the Super-Rich,” Buffett called for a tax increase on everyone making more than $1 million, and an even bigger hike on Americans making $10 million or more. After the 2017 Republican tax plan was signed into law, Buffett told CNBC “I don’t think I need a tax cut.”
“The wealthy are definitely undertaxed relative to the general population,” he told CNBC’s Becky Quick during a February “Squawk Box” interview.
Gates, a close friend of Buffett and one spot above him on the Forbes list, has also called for higher taxes. Although the Microsoft founder says he has paid more than $10 billion in taxes, “the government should require people in my position to pay significantly higher taxes.”
“There’s no doubt that as we raise taxes, we can have most of that additional money come from those who are better off,” Gates said during a conversation with his wife Melinda and hundreds of high school students in New York City in February.
Disney heiress Abigail Disney has pointed to ballooning CEO pay as part of the problem. Along with roughly 200 other New York millionaires, she asked state lawmakers to introduce a “millionaires tax” on those making more than $5 million to help fund affordable housing, infrastructure and other initiatives.
“If your CEO salary is at the 700, 600, 500 times your median workers’ pay, there is nobody on Earth, Jesus Christ himself isn’t worth 500 times his median workers’ pay,” she told CNBC earlier this year. The granddaughter of Roy Disney, co-founder of The Walt Disney Co., declined to comment on whether she thinks Disney CEO Bob Iger is paid too much.
Capitalism vs. socialism
Washington is split on the issue.
Several Democrats, including presidential hopefuls and senators Elizabeth Warren and Bernie Sanders, are campaigning on higher taxes on the wealthy. Freshman Congresswoman Alexandria Ocasio-Cortez — a self-proclaimed Democratic Socialist — has called for a tax rate as high as 70 percent. The plan, laid out in a CBS “60 Minutes” interview with Anderson Cooper, has had no shortage of pushback from corporate America. Former Federal Reserve chairman Alan Greenspan, for one, called it “a terrible idea.”
The inequality issue has fueled a revival of the debate between the two economic systems. President Donald Trump and some Republicans have warned of consequences if Democrats and therefore left-leaning ideologies win next year’s presidential election. During his State of the Union address last month, Trump said, “We are alarmed by new calls to adopt socialism in our country” and that “we renew our resolve that America will never be a socialist country.”
In re-writing the tax code in 2017, the GOP lowered tax rates, which advocates said would juice the economy and result in a larger GDP to pay for the cuts. Individual taxes for the most part came down, as did those on American companies.
Lee Cooperman, another signatory of the Giving Pledge, has been critical of the left’s current progressive tax policies. Earlier this year, Amazon scrapped plans to open its so-called HQ2 in New York amid opposition surrounding $3 billion in incentives the city and state had promised the company.
“What we have is a bunch of candidates running on the Democratic ticket that are left-leaning, and that’s, in my opinion, very counterproductive and destructive,” said Cooperman, who by signing the pledge agreed to donate most of his wealth to charitable causes.
22 rurban 12 hrs 10
Take note of the title of this blog. What you read below will give you some additional knowledge that may help you identify a hidden camera but it certainly won’t guarantee that.
We got lucky (if you can say that): the host had the hidden camera on the same network as the wifi he allowed us access to, and the stream was not protected (it did not require authentication to access).
If a camera is hidden well and is not on the network (i.e. records to an internal memory card), or is on a network that you don't have access to, it may be very difficult to identify.
Below are the steps we are now taking when booking/staying at an Airbnb (or similar).
Step 1: Read the house listing in detail and view all photos
Before you book a house make sure you aren’t setting yourself up to fail.
It is Airbnb policy that any cameras be detailed in the listing. Scroll through all the details, as there doesn't seem to be a defined place for the host to list them: they could be in the description, the amenities section, the safety features or the house rules. If a camera can be seen in a photo of the house listing, then Airbnb deems that you have been notified of it. So, in summary, if you want to avoid a house with cameras, ensure that when you book it you have not been notified in any of these ways that there are cameras.
Step 2: Do a physical check of the house
First, understand what you are looking for. Non-hidden cameras are pretty obvious and are readily identifiable as cameras. Hidden cameras unfortunately come in all shapes and sizes and are hidden in numerous objects. Here are some examples:
Smoke Detector – http://www.brickhousesecurity.com
Fan – http://www.brickhousesecurity.com
Clock – http://www.brickhousesecurity.com
Yes, they even come disguised as screws.
This was the one we found. The hidden camera is in the enclosure on the left (the one on the right is a real smoke detector). If you zoom in on the picture you can see that I have stuffed tissue paper in the hole where the camera lens was so that we could work out what to do without the host watching us. A camera has to “see” to take an image so it needs a hole (although that can be as small as a pinhole) or a clear substrate to see through.
Look for anything unusual in the rooms, e.g. two smoke detectors in the same room, or an alarm sensor but no alarm pad. A hidden camera will typically be placed so it has a field of view of what the person wants to see: usually on a ceiling, on a bedside table, in bathrooms, in corners of rooms etc. Remember that these cameras typically have a wide field of view. Have a close look at any devices you find and see if you can spot a lens. You can take photos using your phone up close with the flash, or shine lights on them in the dark – the lens will usually reflect light.
Step 3: Scan the house network to identify potential cameras
Before we start, you should note that a camera will not be discoverable on the network in the following circumstances:
The camera is not on the same network as the one you have connected to. That is, if a host wants to hide a camera, they can connect it to a network that you may not have access to.
The camera records to an internal memory card, in which case it doesn't need to be connected to a network and therefore won't show up on one.
With that caveat, the process is as follows. Connect to the host's network; typically all host houses provide wireless access with a password. Once you are connected to the host's wireless network you can then access it to discover what is on it. Then launch a network scanning app and scan for devices connected to the network. In my case I used an Android app called "Network Scanner - First Row". There are many alternatives for Windows, Mac, iOS etc.
This app automatically scans the network you are connected to and displays the IP address of the device and the Manufacturer. See example output below.
As you can see the scan has identified a number of devices on the network including the Gateway (wireless access point / router), my laptop and several phones (Huawei and LG).
At this stage you should be looking for any giveaway signs that one of the devices is a camera. For example, the manufacturer could be IPCAMERA. (That was the case in the one I discovered 😂)
In this example the device at the bottom has raised my suspicions, as it is not one of my devices (tick off your own devices and see what's left) and the manufacturer is not a well-known brand. By contrast, we found it common to see the likes of Nest heating devices on the network.
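If you'd rather script the sweep yourself than trust an app, the same discovery step can be sketched in a few lines of Python. This is my own hypothetical illustration, not from the original post: it assumes the host's wifi puts you on a 192.168.0.0/24 network (check your own IP address first) and that devices answer ping.

```python
# Hypothetical sketch: ping-sweep an assumed 192.168.0.0/24 network
# and print the addresses that respond. Adjust the prefix to match
# the network the host's wifi actually assigns you.
import ipaddress
import subprocess
from concurrent.futures import ThreadPoolExecutor

def is_up(ip: str) -> bool:
    # One echo request with a 1-second timeout (Linux ping flags;
    # macOS and Windows flags differ slightly).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

hosts = [str(ip) for ip in ipaddress.ip_network("192.168.0.0/24").hosts()]
with ThreadPoolExecutor(max_workers=64) as pool:
    for ip, up in zip(hosts, pool.map(is_up, hosts)):
        if up:
            print(ip, "is up")
```

Note that a ping sweep misses devices that ignore ICMP, so treat an empty result with suspicion rather than relief.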
I then use a port scanning app to see what ports (different options to connect to a device) are open on the device. This typically helps me identify the device.
I used the Android “Network Mapper” app.
Once you run the app you need to enter the IP address of the device you want to port scan. In this case it is 192.168.0.116
The output of that port scan is below:
The details you are interested in are the Open ports detailed at the bottom of the scan.
In this example there are 4 ports open: 81, 554, 1935 and 8080 and the port scanner notes the typical service used on these ports.
You can Google the services; for this one you would discover that RTSP and RTMP are services used to stream video. Any port that appears with the service HTTP or HTTPS can be connected to with your web browser.
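The port-scanning step can also be scripted. Here is a minimal Python sketch (my own illustration, not the app's output), checking the address from the example above against a handful of ports commonly used by IP cameras and web servers:

```python
# Hypothetical sketch: TCP connect scan of one device, covering common
# web and video-streaming ports (554 = RTSP, 1935 = RTMP).
import socket

TARGET = "192.168.0.116"  # address taken from the example scan above
PORTS = [80, 81, 443, 554, 1935, 8080]

for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1.0)  # don't hang on filtered ports
    if sock.connect_ex((TARGET, port)) == 0:  # 0 means the connection succeeded
        print(f"port {port} is open")
    sock.close()
```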
For example here is the output below when I connect to Port 8080 with my web browser:
The important thing to note from the output of the above is the mention of ONVIF. ONVIF is a standardised way of connecting to IP security cameras.
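Checking an open HTTP port for camera giveaways like ONVIF can be automated too. A rough sketch under the same assumptions (same example address, and using the third-party requests library):

```python
# Hypothetical sketch: fetch whatever is served on an open HTTP port
# and look for strings that commonly identify IP cameras.
import requests  # pip install requests

url = "http://192.168.0.116:8080/"
try:
    body = requests.get(url, timeout=3).text.lower()
    for clue in ("onvif", "rtsp", "ipcam", "camera"):
        if clue in body:
            print("suspicious keyword found:", clue)
except requests.RequestException as exc:
    print("could not connect:", exc)
```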
So, we can be fairly certain that this is a camera. In this instance it is an external camera that we were made aware of.
In our case finding and accessing the hidden camera was easy because:
it was on the same network as the wifi we were given access to
it was named in the network scan as an "IPCAMERA"
it had a live stream running on port 80 which could be connected to without requiring authentication (login and password)
In many cases, accessing a hidden camera's video stream may require more "invasive" techniques (e.g. password bruteforcing, vulnerability exploitation etc.), essentially hacking the device to get access. Be aware that this may be illegal.
So, I hope that has provided some guidance. Just be careful: sometimes a little knowledge can get you all paranoid without there actually being an issue.
51 bdon 14 hrs 9
pieterh wrote on 29 Jan 2012 12:33
My tweet "Still amazed by the power of engineers to over-design. Complexity is easy, folks, it's simplicity that is hard" got over 50 retweets. Clearly I touched a nerve in a world swimming in hopeless complexity. But talk is easy. How do we design for simplicity? Well, I've got a process, which I will explain. I call this process "Simplicity Oriented Design", or SOD.
Before we get to SOD, let's look at two other classic design processes. These don't work, yet are firmly applied by a majority of engineers and designers, especially in software where it's possible to construct byzantine complexity. They are slow-motion tragedies but can be fun to watch, from a safe distance.
Trash-Oriented Design
The most popular design process in large businesses seems to be "Trash Oriented Design", or TOD. TOD feeds off the belief that all we need to make money are great ideas. It's tenacious nonsense but a powerful crutch for people who lack imagination. The theory goes that ideas are rare, so the trick is to capture them. It's like non-musicians being awed by a guitar player, not realizing that great talent is so cheap it literally plays on the streets for coins.
The main outputs of TOD are expensive "ideations": concepts, design documents, and finally products that go straight into the trash can. It works as follows:
The Creative People come up with long lists of "we could do X and Y". I've seen endlessly detailed lists of everything amazing a product could do.
Once the creative work of idea generation has happened, it's just a matter of execution, of course. So the managers and their consultants pass their brilliant, world-shattering ideas to "user experience" designers.
These talented designers take the tens of ideas the managers came up with, and turn them into hundreds of amazing, world-changing designs.
These bountiful and detailed designs get passed to engineers, who scratch their heads and wonder who the heck came up with such stupid nonsense. They start to argue back, but the designs come from up high, and really, it's not up to engineers to argue with creative people and expensive consultants.
So the engineers creep back to their cubicles, humiliated and threatened into building the gigantic but oh so elegant pile of crap. It is bone-breakingly hard work since the designs take no account of practical costs. Minor whims might take weeks of work to build.
As the project gets delayed, the managers bully the engineers into giving up their evenings and weekends. Eventually, something resembling a working product makes it out of the door. It's creaky and fragile, complex and ugly.
The designers curse the engineers for their incompetence and pay more consultants to put lipstick onto the pig, and slowly the product starts to look a little nicer.
By this time, the managers have started to try to sell the product and they find, shockingly, that no-one wants it. Undaunted and courageously they build million-dollar web sites and ad campaigns to explain to the public why they absolutely need this product. They do deals with other businesses to force the product on the lazy, stupid and ungrateful market.
After twelve months of intense marketing, the product still isn't making profits. Worse, it suffers dramatic failures and gets branded in the press as a disaster. The company quietly shelves it, fires the consultants, buys a competing product from a small start-up and rebrands that as its own Version 2. Hundreds of millions of dollars end up in the trash.
Meanwhile, another visionary manager, somewhere in the organization, drinks a little too much tequila with some marketing people and has a Brilliant Idea.
TOD would be a caricature if it wasn't so common. Something like 19 out of 20 market-ready products built by large firms are failures. The remaining 1 in 20 probably only succeeds because the competitors are so bad.
The main lessons of TOD are quite straightforward but hard to swallow. They are:
Ideas are cheap. No exceptions. There are no brilliant ideas. Anyone who tries to start a discussion with "oooh, we can do this too!" should be beaten down with all the passion one reserves for traveling musicians. It is like sitting in a cafe at the foot of a mountain, drinking a hot chocolate and telling others, "hey, I have a great idea, we can climb that mountain! And build a chalet on top! With two saunas! And a garden! Hey, and we can make it solar powered! Dude, that's awesome! What color should we paint it? Green! No, blue! OK, go and make it, I'll stay here and make spreadsheets and graphics!"
The starting point for a good design process is to collect problems that confront people. The second step is to evaluate these problems with the basic question, "how much is it worth to solve this problem?" Having done that, we can collect a set of problems that are worth solving.
Good solutions to real problems will succeed as products. Their success will depend on how good and cheap the solution is, and how important the problem is. But their success will also depend on how much they demand in effort to use, in other words how simple they are.
Hence, after slaying the dragon of utter irrelevance, we attack the demon of complexity.
Complexity-Oriented Design
Really good engineering teams and small firms can usually build good products. But the vast majority of products still end up being too complex and less successful than they might be. This is because specialist teams, even the best, often stubbornly apply a process I call "Complexity-Oriented Design", or COD, which works as follows:
Management correctly identifies some interesting and difficult problem with economic value. In doing so they already leapfrog over any TOD team.
The team, with enthusiasm, start to build prototypes and core layers. These work as designed, and thus encouraged, the team go off into intense design and architecture discussions, coming up with elegant schemas that look beautiful and solid.
Management comes back and challenges the team with yet more difficult problems. We tend to equate value with cost, so the harder the problem, and the more expensive it is to solve, the more the solution should be worth, in their minds.
The team, being engineers and thus loving to build stuff, build stuff. They build and build and build and end up with massive, perfectly designed complexity.
The products go to market, and the market scratches its head and asks, "seriously, is this the best you can do?" People do use the products, especially if they aren't spending their own money in climbing the learning curve.
Management gets positive feedback from its larger customers, who share the same idea that high cost (in training and use) means high value, and so continues to push the process.
Meanwhile, somewhere across the world, a small team is solving the same problem using SOD, and a year later smashes the market to little pieces.
COD is characterized by a team obsessively solving the wrong problems to the point of ridiculousness. COD products tend to be large, ambitious, complex, and unpopular. Much open source software is the output of COD processes. It is insanely hard for engineers to stop extending a design to cover more potential problems. They argue, "what if someone wants to do X?" but never ask themselves, "what is the real value of solving X?"
A good example of COD in practice is Bluetooth, a complex, over-designed set of protocols that users hate. It continues to exist only because there are no alternatives. Bluetooth is perfectly secure, which is close to useless for a proximity protocol. At the same time it lacks a standard API for developers, meaning it's really costly to use Bluetooth in applications.
On the #zeromq IRC channel, Wintre wrote of how enraged he was many years ago when he "found that XMMS 2 had a working plugin system but could not actually play music."
COD is a form of large-scale "rabbit holing", in which designers and engineers cannot distance themselves from the technical details of their work. They add more and more features, utterly misreading the economics of their work.
The main lessons of COD are also simple but hard for experts to swallow. They are:
Making stuff that you don't immediately have a need for is pointless. Doesn't matter how talented or brilliant you are, if you just sit down and make stuff, you are most likely wasting your time.
Problems are not equal. Some are simple, and some are complex. Ironically, solving the simpler problems often has more value to more people than solving the really hard ones. So if you allow engineers to just work on random things, they'll mostly focus on the most interesting but least worthwhile things.
Engineers and designers love to make stuff and decoration, and this inevitably leads to complexity. It is crucial to have a "stop mechanism", a way to set short, hard deadlines that force people to make smaller, simpler answers to just the most crucial problems.
Simplicity-Oriented Design
Simplicity-Oriented Design starts with a realization: we do not know what we have to make until after we start making it. Coming up with ideas, or large-scale designs isn't just wasteful, it's directly toxic to designing the truly accurate solutions. The really juicy problems are hidden like far valleys, and any activity except active scouting creates a fog that hides those distant valleys. You need to keep mobile, pack light, and move fast.
SOD works as follows:
We collect a set of interesting problems (by looking at how people use technology or other products) and we line these up from simple to complex, looking for and identifying patterns of use.
We take the simplest, most dramatic problem and we solve this with a minimal plausible solution, or "patch". Each patch solves exactly a genuine and agreed problem in a brutally minimal fashion.
We apply one measure of quality to patches, namely "can this be done any simpler while still solving the stated problem?" We can measure complexity in terms of concepts and models that the user has to learn or guess in order to use the patch. The fewer, the better. A perfect patch solves a problem with zero learning required by the user.
Our product development consists of a patch that solves the problem "we need a proof of concept" and then evolves in an unbroken line to a mature series of products, through hundreds or thousands of patches piled on top of each other.
We do not do anything that is not a patch. We enforce this rule with formal processes that demand that every activity or task is tied to a genuine and agreed problem, explicitly enunciated and documented.
We build our projects into a supply chain where each project can provide problems to its "suppliers" and receive patches in return. The supply chain creates the "stop mechanism" since when people are impatiently waiting for an answer, we necessarily cut our work short.
Individuals are free to work on any projects, and provide patches at any place they feel it's worthwhile. No individuals "own" any project, except to enforce the formal processes. A single project can have many variations, each a collection of different, competing patches.
Projects export formal and documented interfaces so that upstream (client) projects are unaware of change happening in supplier projects. Thus multiple supplier projects can compete for client projects, in effect creating a free and competitive market.
We tie our supply chain to real users and external clients, and we drive the whole process by rapid cycles so that a problem received from outside users can be analyzed, evaluated, and solved with a patch in a few hours.
At every moment from the very first patch, our product is shippable. This is essential, because a large proportion of patches will be wrong (10-30%) and only by giving the product to users can we know which patches have become problems and themselves need solving.
SOD is a form of "hill climbing algorithm", a reliable way of finding optimal solutions to the most significant problems in an unknown landscape. You don't need to be a genius to use SOD successfully; you just need to be able to see the difference between the fog of activity and the progress towards new real problems.
A really good designer with a good team can use SOD to build world-class products, rapidly and accurately. To get the most out of SOD, the designer has to use the product continuously, from day 1, and develop his or her ability to smell out problems such as inconsistency, surprising behavior, and other forms of friction. We naturally overlook many annoyances but a good designer picks these up, and thinks about how to patch them. Design is about removing friction in the use of a product.
Conclusions
There are many aspects to getting product-building teams and organizations to think wisely. You need diversity, freedom, challenge, resources, and so on. I discuss these in detail in my forthcoming book, Culture and Empire. However, even if you have all the right ingredients, the default processes that skilled engineers and designers develop will result in complex, hard-to-use products.
The classic errors are to focus on ideas, not problems; to focus on the wrong problems; to misjudge the value of solving problems; to not use one's own work; and in many other ways to misjudge the real market.
Simplicity Oriented Design is a reliable, repeatable way of developing world-class products that delight users with their simplicity and elegance. This process organizes people into flexible supply chains that are able to navigate a problem landscape rapidly and cheaply. They do this by building, testing, and keeping or discarding minimal plausible solutions, called "patches". Living products consist of long series of patches, applied one atop the other.
SOD works best for software design, and some open source projects already work in this way. However, many or most still fall into classic traps such as over-engineering and "what if" design. Wikipedia is a good example of SOD applied to the non-software domain.
I use SOD daily to build, and help others build, world-class commercial products. If you find SOD interesting and useful, read Culture and Empire when it comes out, particularly chapter two, where I explain the science of Social Architecture.
This article is an extract from Chapter 6 of the ZeroMQ book, on community building. If you like this, buy the book :-)
Cosmologist claims Universe may not be expanding (2013)
156 hairytrog 11 hrs 73
https://www.nature.com/news/cosmologist-claims-universe-may-not-be-expanding-1.13379 news.ycombinator.com/item?id=19588996
The conventional model of cosmology is that most galaxies recede from one another as space itself inflates like the surface of a balloon — which would explain why other galaxies appear redshifted from our own galaxy's point of view. But one cosmologist has a different interpretation of that redshift.
It started with a bang, and has been expanding ever since. For nearly a century, this has been the standard view of the Universe. Now one cosmologist is proposing a radically different interpretation of events — in which the Universe is not expanding at all.
In a paper posted on the arXiv preprint server, Christof Wetterich, a theoretical physicist at the University of Heidelberg in Germany, has devised a different cosmology in which the Universe is not expanding but the mass of everything has been increasing. Such an interpretation could help physicists to understand problematic issues such as the so-called singularity present at the Big Bang, he says.
Although the paper has yet to be peer-reviewed, none of the experts contacted by Nature dismissed it as obviously wrong, and some of them found the idea worth pursuing. “I think it’s fascinating to explore this alternative representation,” says Hongsheng Zhao, a cosmologist at the University of St Andrews, UK. “His treatment seems rigorous enough to be entertained.”
Astronomers measure whether objects are moving away from or towards Earth by analysing the light that their atoms emit or absorb, which comes in characteristic colours, or frequencies. When matter is moving away from us, these frequencies appear shifted towards the red, or lower-frequency, part of the spectrum, in the same way that we hear the pitch of an ambulance siren drop as it speeds past.
In the 1920s, astronomers including Georges Lemaître and Edwin Hubble found that most galaxies exhibit such a redshift — and that the redshift was greater for more distant galaxies. From these observations, they deduced that the Universe must be expanding.
Red herring
But, as Wetterich points out, the characteristic light emitted by atoms is also governed by the masses of the atoms' elementary particles, and in particular of their electrons. If an atom were to grow in mass, the photons it emits would become more energetic. Because higher energies correspond to higher frequencies, the emission and absorption frequencies would move towards the blue part of the spectrum. Conversely, if the particles were to become lighter, the frequencies would become redshifted.
Because the speed of light is finite, when we look at distant galaxies we are looking backwards in time — seeing them as they would have been when they emitted the light that we observe. If all masses were once lower, and had been constantly increasing, the colours of old galaxies would look redshifted in comparison to current frequencies, and the amount of redshift would be proportionate to their distances from Earth. Thus, the redshift would make galaxies seem to be receding even if they were not.
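The logic can be made concrete with a standard textbook relation (a sketch of the general idea, not taken from Wetterich's paper). Atomic transition energies scale with the electron mass $m_e$ through the hydrogen-like energy levels,

$$E_n = -\frac{m_e c^2 \alpha^2}{2 n^2},$$

so the frequencies an atom emits satisfy $\nu \propto m_e$. If the electron mass grows with cosmic time, light emitted long ago at $t_{\text{emit}}$ arrives with an apparent redshift

$$1 + z = \frac{m_e(t_{\text{now}})}{m_e(t_{\text{emit}})},$$

which plays the same observational role as the conventional expansion formula $1 + z = a(t_{\text{now}})/a(t_{\text{emit}})$, where $a$ is the cosmic scale factor.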
Work through the maths in this alternative interpretation of redshift, and all of cosmology looks very different. The Universe still expands rapidly during a short-lived period known as inflation. But prior to inflation, according to Wetterich, the Big Bang no longer contains a 'singularity' where the density of the Universe would be infinite. Instead, the Big Bang stretches out in the past over an essentially infinite period of time. And the current cosmos could be static, or even beginning to contract.
Purely theory
The idea may be plausible, but it comes with a big problem: it can't be tested. Mass is what’s known as a dimensional quantity, and can be measured only relative to something else. For instance, every mass on Earth is ultimately determined relative to a kilogram standard that sits in a vault on the outskirts of Paris, at the International Bureau of Weights and Measures. If the mass of everything — including the official kilogram — has been growing proportionally over time, there could be no way to find out.
For Wetterich, the lack of an experimental test misses the point. He says that his interpretation could be useful for thinking about different cosmological models, in the same way that physicists use different interpretations of quantum mechanics that are all mathematically consistent. In particular, Wetterich says, the lack of a Big Bang singularity is a major advantage.
He will have a hard time winning everyone over to his interpretation. “I remain to be convinced about the advantage, or novelty, of this picture,” says Niayesh Afshordi, an astrophysicist at the Perimeter Institute in Waterloo, Canada. According to Afshordi, cosmologists envisage the Universe as expanding only because it is the most convenient interpretation of galaxies' redshift.
Others say that Wetterich’s interpretation could help to keep cosmologists from becoming entrenched in one way of thinking. “The field of cosmology these days is converging on a standard model, centred around inflation and the Big Bang,” says physicist Arjun Berera at the University of Edinburgh, UK. “This is why it’s as important as ever, before we get too comfortable, to see if there are alternative explanations consistent with all known observation.”
We Moved from Heroku to Google Kubernetes Engine
23 shosti 2 hrs 9
https://www.rainforestqa.com/blog/2019-04-02-why-we-moved-from-heroku-to-google-kubernetes-engine/ news.ycombinator.com/item?id=19578394
Until late last year, Rainforest ran most of our production applications on Heroku. Heroku was a terrific platform for Rainforest in many ways: it allowed us to scale and remain agile without hiring a large Ops team, and the overall developer experience is unparalleled. But in 2018 it became clear that we were beginning to outgrow Heroku. We ended up moving to Google Cloud Platform (GCP), with most of our applications running on Google Kubernetes Engine (GKE); here's how we made the decision and picked our new DevOps tech stack.
Rationale: The 3 Main Driving Factors Behind The Switch
We are heavy Postgres users at Rainforest, with most of our customer data in a single large database (eventually we will probably split our data into smaller independent services, but that’s an engineering effort we don’t want to dive into quite yet). In 2018 we were rapidly approaching the size limits of our Heroku database plan (1 TB at the time) and we didn’t want to risk hitting any limits while still under contract with Heroku.
We were also running into limitations with Heroku on the compute side: some of our newer automation-based features involve running a large number of short-lived batch jobs, which doesn’t work well on Heroku (due to the relatively high cost of computing resources). As a stopgap we’ve been running a few services on AWS Batch, but we’ve never been particularly happy with that solution since we have multiple compute environments that are vastly different from an operational perspective (very few engineers on the team have a deep understanding of Batch).
As Rainforest grows, application security is of paramount importance, and the “standard” Heroku offering was becoming problematic due to its lack of flexibility around security (for instance, the inability to set up Postgres instances in private networks). At one point we attempted to migrate to Heroku Shield to address some of these issues, but we found that it wasn’t a good fit for our application.
Perhaps surprisingly, cost was not the initial driving factor in the decision to move away from Heroku. Heroku has a reputation for being extremely expensive, but that hasn’t been our experience in general: when factoring in the savings from keeping a lean Ops team Heroku was quite cost-effective when compared to the major cloud providers. This was especially true because the bulk of our hosting costs go towards databases, and managed Postgres service costs are similar across cloud providers (including Heroku).
Nevertheless, Heroku’s costs were becoming an issue for a couple of reasons:
- Heroku’s default runtime doesn’t include a number of security-related features that come “out of the box” with the major cloud providers, such as Virtual Private Cloud. Once those features become a requirement (which they were for us), Heroku becomes a much less cost-effective choice.
- GCP and AWS are both cheaper for raw computing resources than Heroku, and as mentioned earlier we haven’t been able to run all of our compute-intensive services on Heroku. When planning for future growth, we wanted a platform that could handle our web services and our more compute-intensive workloads with a common set of tooling.

Our Heroku Setup
Heroku is very opinionated in how it expects you to run applications on its environment: all applications must follow 12-Factor guidelines to run well, and applications are always run in Heroku’s dynos which are not terribly flexible. These restrictions come with significant benefits, though. 12-Factor apps are easy to scale horizontally and have very few dependencies on their environment, making them easy to run in local development environments and port to new production environments. We followed the 12-Factor guidelines very closely for our applications, and persistent data was stored exclusively in Postgres or third-party services like S3.
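For a flavor of what that discipline buys you, here is a minimal sketch of 12-Factor-style configuration (the variable names are illustrative, not Rainforest's actual settings): because every environment-specific value comes from the process environment, the same code runs unchanged on Heroku, on Kubernetes, or on a laptop.

```python
import os

# 12-Factor config: everything environment-specific is read from
# environment variables, never hard-coded or baked into the image.
DATABASE_URL = os.environ["DATABASE_URL"]  # required; crash early if absent
REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379")
WEB_CONCURRENCY = int(os.environ.get("WEB_CONCURRENCY", "2"))

def connection_info() -> str:
    """Summarize where this process keeps its state (nothing local)."""
    return f"db={DATABASE_URL!r} cache={REDIS_URL!r} workers={WEB_CONCURRENCY}"

if __name__ == "__main__":
    print(connection_info())
```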
For autoscaling, we used HireFire for most of our Heroku applications. Web workers were generally scaled based on load, but background workers were scaled based on queue sizes of various kinds. (This turned out to be a tricky feature to mimic in most other PaaS offerings.)
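To make queue-based scaling concrete, here is a minimal sketch of the kind of rule involved (our own illustration, not HireFire's actual algorithm): pick a target number of jobs per worker and clamp the result between configured bounds.

```python
import math

def desired_workers(queue_depth: int,
                    jobs_per_worker: int = 50,
                    min_workers: int = 1,
                    max_workers: int = 20) -> int:
    """Scale background workers from queue depth.

    One worker per `jobs_per_worker` queued jobs, clamped to
    [min_workers, max_workers] so bursts can't scale us to infinity.
    """
    wanted = math.ceil(queue_depth / jobs_per_worker)
    return max(min_workers, min(max_workers, wanted))

# e.g. 0 jobs -> 1 worker, 400 jobs -> 8 workers, 10_000 jobs -> 20 workers
assert desired_workers(0) == 1
assert desired_workers(400) == 8
assert desired_workers(10_000) == 20
```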
Why Kubernetes?
Given that we were moving away from Heroku, we needed a new platform that would run our applications without too much porting work. We could have skipped containerized solutions entirely and run our code directly on VMs (using tools like capistrano to perform the actual deployment), but we quickly discarded this option for a number of reasons:
- Our environment is heterogeneous: our biggest applications use Rails, but we also have smaller services written in Go, Elixir, Python, and Crystal. Maintaining separate deployment pipelines for each language would have been a major pain point.
- Setting up essential features such as autoscaling, high-availability, monitoring, and log aggregation would have involved significant development time, and it would have been virtually impossible to implement them in a vendor-agnostic way.
- Heroku behaves like a containerized environment (with similar technologies under the hood as Docker), and it was what our developers were used to. We would have had to see significant benefits to move to an alternative model.

In general, the industry is moving towards containerized deployment for precisely reasons like these, and we didn’t see any compelling reasons to go against the trend.
With that in mind, we evaluated four major Docker-based platforms:
AWS Elastic Beanstalk
AWS markets Elastic Beanstalk as their “easy-to-use” way to run containerized applications. While this theoretically seemed like an interesting option, initial experiments showed it to be far from easy to use in practice. Elastic Beanstalk has also not seen many significant updates in quite some time, so AWS’s commitment to the product is unclear. It was an easy option to say no to.
Convox
One option we considered more seriously was Convox, which bills itself as an open-source alternative to Heroku (using AWS as its underlying infrastructure provider). Given that we fit their customer profile, the migration would probably have been fairly straightforward.
After some evaluation, though, we were concerned about relying on a platform with relatively little traction in the industry compared to the major cloud providers. Convox gives its customers direct access to underlying AWS resources, which is nice, but business changes at Convox could still have left us relying on an unsupported product—not a risk we were comfortable with for such a critical vendor. Convox was also missing a few key features related to autoscaling, which was the final nail in the coffin.
AWS Elastic Container Service (ECS)
ECS is more or less a direct competitor to Kubernetes, offering a way to run containerized applications with a great deal of flexibility (at the cost of complexity). We already had some exposure to ECS through AWS Batch (which is a layer on top of ECS) and we weren’t particularly impressed with the user experience. We also weren’t keen on the amount of vendor lock-in we’d be accepting by using ECS (it would have been impossible, for instance, to set up a production-like environment on developer laptops), or happy about the amount of development work it would have taken to set up custom autoscaling and similar features.
If no better alternatives existed we might have settled on ECS, but thankfully that wasn’t the case.
Kubernetes
Kubernetes was the clear standout among the options we considered for a number of reasons:
- Kubernetes has a huge amount of traction in the DevOps landscape, with managed implementations from all the major cloud vendors and virtually endless training materials and complementary technologies.
- Kubernetes is open source, which was a major plus: it meant that we could avoid vendor lock-in and implement local development environments that mimic production.
- Kubernetes has a large feature set that fit well with our requirements, including our more exotic necessities like autoscaling based on custom metrics.

Kubernetes’ detractors often say that its complexity is overkill for many situations. While it’s true that Kubernetes is an incredibly large and complicated piece of software, the basic abstractions are mostly intuitive and well thought-out, and we’ve been able to side-step a lot of the complexity for a couple of reasons:

- Kubernetes is a natural platform for 12-Factor apps, which have no need for data persistence, statefulness, and other hairy issues.
- Using a managed Kubernetes service as a client is orders of magnitude easier than actually running a Kubernetes cluster.

Why Google Cloud Platform?
We had decided to use Kubernetes, so the question remained: which Kubernetes? Running a production-worthy Kubernetes cluster on raw VMs was not really a viable option for us (since our Ops team is still relatively small), so we evaluated managed Kubernetes services on the three most prominent cloud providers: AWS, GCP, and Azure.
Kubernetes was not our only requirement: we also needed managed Postgres and Redis services. This eliminated Azure as an option, since its managed Postgres service is relatively immature compared to AWS and GCP (with data size limits comparable to Heroku’s). That left AWS and GCP, which were equally good choices in most respects: cost projections were remarkably similar, and both platforms offer a great range of managed services.
There was, however, a huge difference between GKE, the managed Kubernetes service on GCP, and EKS, AWS’s equivalent. GKE is a far more mature product, with a number of essential features that EKS lacks:
- GKE manages the Kubernetes master and nodes, while EKS only manages the master. With EKS, we would have had to maintain the Kubernetes nodes completely, including maintaining security updates.
- GKE manages autoscaling at the cluster level and also has terrific support for horizontal pod autoscaling at the application level, including support for autoscaling on custom metrics. At the time of evaluation, EKS had no support for cluster-level autoscaling and extremely limited support for horizontal pod autoscaling of any kind.

Those points only scratch the surface of the differences between GKE and EKS, but they were enough to eliminate EKS as a viable option.
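For a flavor of what custom-metric autoscaling looks like on GKE, here is a minimal sketch (not our production config: the Deployment name, metric name, and numbers are all hypothetical) that emits a HorizontalPodAutoscaler manifest scaling a worker Deployment on an external Stackdriver metric:

```python
import json

# HorizontalPodAutoscaler (autoscaling/v2beta2, current at the time of
# our migration) scaling a hypothetical "worker" Deployment on an
# external metric exported to Stackdriver.
hpa = {
    "apiVersion": "autoscaling/v2beta2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "worker"},
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "worker",
        },
        "minReplicas": 1,
        "maxReplicas": 20,
        "metrics": [
            {
                "type": "External",
                "external": {
                    # GKE exposes Stackdriver metrics with "|" separators;
                    # this metric name is illustrative.
                    "metric": {"name": "custom.googleapis.com|queue_depth"},
                    # Aim for ~50 queued jobs per replica on average.
                    "target": {"type": "AverageValue", "averageValue": "50"},
                },
            }
        ],
    },
}

if __name__ == "__main__":
    # kubectl accepts JSON as well as YAML:
    #   python hpa.py | kubectl apply -f -
    print(json.dumps(hpa, indent=2))
```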
The Tech Stack
With our big decisions made, we had to choose our new tech stack! When choosing technologies, we had a few guiding principles:
- Mostly managed: Our Ops Team is still quite small given the scope of its duties, so we wanted to minimize cases where they were responsible for running complicated software stacks. Our strong preference was for managed services where available.
- Minimize change: The migration was inevitably going to be a large change for the engineering team, but we wanted to make the transition as painless as possible. Where feasible, we wanted to keep our existing providers and practices in place.
- Boring where possible: The “Cloud Native” DevOps landscape is in an exciting and fast-moving phase, with new technologies springing up seemingly overnight. Rainforest’s hosting needs are generally quite simple, however: most of our services are “traditional” Postgres-backed web applications that communicate over REST APIs or message queues. While we appreciate the architectural flexibility that comes with Kubernetes (especially in comparison to Heroku), for the initial migration we decided not to go too far down the rabbit-hole of using “cutting-edge” auxiliary technologies that are not strictly necessary for our use-case.

With those guidelines in mind, we settled on the following technologies:
- Terraform: One of our more consequential early decisions was to move to infrastructure-as-code wherever possible. Terraform isn’t perfect, but it’s by far the most popular and complete option for managing infrastructure-as-code, especially on GCP. (We’ve used the transition as an “excuse” to bring many other aspects of our infrastructure under management by Terraform.)
- Google Kubernetes Engine: Given our decision to use Kubernetes, GKE was a no-brainer—it’s fully managed and has a very rich feature-set.
- Cloud SQL for PostgreSQL: Our Postgres databases are probably the single most critical part of our infrastructure, so it was important to find a managed Postgres service that supported the features we wanted (such as high availability, automated backups, and internal network connectivity). Cloud SQL fit the bill.
- Cloud Memorystore: We are relatively light users of Redis, but we do use it as a caching layer for some applications. Cloud Memorystore is a relatively no-frills Redis implementation but was good enough for our needs.
- Helm: Helm fills in some “missing pieces” for deploying to Kubernetes (for instance, templating and release management). We chose it over alternatives due to its large community and relative simplicity. (For the actual deployment process, we use Cloud Build to build our applications’ Docker images and CircleCI to initiate releases.)
- Stackdriver: Stackdriver is more or less the “default” logging and monitoring solution on GKE, and it has some integrations that were necessary for our implementation.

We were able to keep most of our other existing infrastructure tools (such as Cloudflare and Statuspage) with minimal changes.
There were also a few technologies that we considered but didn’t make the cut for the initial transition:
- Istio: When we began the transition, installing and managing Istio was a manual process and seemed far too involved for our needs. GKE has since added built-in Istio support, which we may consider using in the future, but at our scale we don’t yet see the need for a service mesh.
- Vault: Vault has a number of compelling features for secrets management, but the fact that we would have to run it ourselves as a critical piece of infrastructure is a major disadvantage. We may consider adding it as part of a future infrastructure upgrade, however.
- Spinnaker, Weaveworks, and similar: Kubernetes allows for a huge amount of deployment flexibility, and there are a number of powerful CI/CD options that integrate with Kubernetes to implement things like customized deployment strategies. But we had a pre-existing CI/CD pipeline (using CircleCI) that we were quite happy with, so we decided to implement the minimal changes necessary to integrate with Kubernetes rather than try to implement something “fancier”.

In a future post, I’ll cover the migration process itself.
Every era of technological progress stirs worries that machines will replace humans and cause mass unemployment. Viewed negatively, machines' capabilities do overlap with ours; viewed positively, machines also free humans to concentrate on what only humans can do. How to adjust the emphasis of education, so that human abilities become more distinct from and more complementary to machines, therefore matters all the more.
In primitive societies, physical strength was one of the most important factors in human survival. As mechanical technology advanced, machines took over much physical labour, and humans could lean further toward intellectual work. The rise of artificial intelligence, however, draws distinctions within intelligence itself: which aspects of intelligence can machines take over, and which remain uniquely human?
Computers are limited by the range of their data
To answer this question, let us first review how artificial intelligence basically works. Some frontier research will inevitably fall outside the discussion below, but roughly speaking, most AI algorithms can be summed up in one phrase: finding patterns. We feed large amounts of data into a computer, let it uncover the regularities in that data, and it can then make judgments about new cases.
When I say that most artificial intelligence is "finding patterns", there is no disparagement in it; on the contrary, that computers can handle so many problems through intricate and sophisticated pattern finding is itself a marvelous thing. But this class of algorithms also has limitations, which I will sketch below, in increasing order of importance, in the hope of prompting further discussion.
First, because AI algorithms are grounded in the data they are given, they may not predict well in situations outside the data's existing range. Moreover, if a situation belongs to a paradigm different from that range (one where even the number and roles of the variables differ), the computer faces still greater difficulty. The local optimum a computer finds is therefore not necessarily the global optimum.
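A minimal sketch of both points in Python (entirely illustrative, not from the column): a model fit on a narrow range of data captures the pattern there, yet extrapolates badly outside it.

```python
import numpy as np

# "Finding patterns": fit a cubic polynomial to noisy samples of sin(x)
# drawn only from the narrow range [0, 2].
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 2.0, size=200)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=200)
coeffs = np.polyfit(x_train, y_train, deg=3)

def predict(x):
    return np.polyval(coeffs, x)

# Inside the training range the learned pattern is accurate...
print("f(1.0):", predict(1.0), "vs true", np.sin(1.0))
# ...but far outside it the cubic diverges, while sin(8.0) stays in [-1, 1].
print("f(8.0):", predict(8.0), "vs true", np.sin(8.0))
```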
One might ask: if existing data cannot predict situations beyond its range, how do humans manage to? And if humans can, why not feed the basis of those human predictions into the computer so that it can predict too? The answer is that human prediction draws on common sense and a sense of the big picture accumulated over a lifetime. That common sense is scattered across every corner of life, and the big-picture sense carries a degree of vagueness; both are hard to encode explicitly into a computer.
Suppose we built a robot to chair the Federal Reserve and set interest-rate adjustments. From abundant historical data, the robot might learn many regularities linking interest rates to economic indicators, but it would not necessarily know what would happen if rates were pushed to extremes (negative territory, say), because the relevant reference data are too scarce. More strikingly: if the Federal Reserve were abolished and the free market left to set interest rates, would that beat the current system? Of all the machine-learning algorithms I studied at Stanford, I cannot think of one that could answer this question effectively.
Computers' weakness in the big picture also shows up in board games. Take chess, Western or Chinese: computers far outplay humans in the opening and the middlegame. But in certain unusual endgames, especially when few pieces remain, the board is open, and every move has many possibilities, the computer's edge is less pronounced. With limited time, neither computer nor human can calculate every line exhaustively, and overall strategy matters more; that is where human big-picture judgment comes into play.
In artistic creation, a computer can observe the world and learn the imagery and emotions in earlier works, recombining them into new pieces, but it cannot know emotions that no predecessor has expressed. Humans, by contrast, can reach new creative inspiration through introspection. Wang Guowei, for instance, wrote three lines of ci poetry: "I climb the high peak to peer at the bright moon; by chance I open heaven's eye and gaze down on the mortal world; alas, I myself am one of those in the view." The meaning is simply too singular; before Wang, even a computer could hardly have produced it.
Next I want to discuss a more important issue: the relationship between artificial intelligence and causal judgment. The data we feed into AI algorithms can reveal only the correlations between variables; they do not express causation. If we collect data on a room's temperature and on how much the people in it sweat, for example, we may find a positive relationship, but the data alone cannot tell us whether rising temperature causes the sweating, the sweating causes the temperature to rise, or some third variable drives both [see note].
Now suppose someone asks: if we forced people to sweat (by injecting certain drugs, say), what would happen to the room's temperature? Answering that requires knowing the causal relationship between temperature and sweating. For some questions we can probe causality with direct experiments, but in many cases direct experiments are constrained by feasibility and ethics.
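A minimal simulation of the point (illustrative only): a hidden common cause produces a strong correlation between two variables even though neither causes the other, and the observed data alone cannot distinguish the three explanations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hidden confounder: e.g. the number of people in the room.
occupancy = rng.poisson(lam=10, size=n)

# The confounder drives both observed variables; neither causes the other.
temperature = 20 + 0.5 * occupancy + rng.normal(scale=1.0, size=n)
sweat = 0.3 * occupancy + rng.normal(scale=1.0, size=n)

# A strong correlation appears anyway (analytically about 0.6):
r = np.corrcoef(temperature, sweat)[0, 1]
print(f"correlation(temperature, sweat) = {r:.2f}")

# An intervention (forcing sweating directly) would break the pattern:
# changing `sweat` by hand leaves `temperature` untouched, which the
# observational correlation alone cannot tell you.
```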
Causal judgment takes experience and common sense
The example above is simple, of course, but in general only years of accumulated life experience give our minds reliable models of how the world works, models that let us make causal judgments. To train computers to make causal judgments, we would have to teach them models of an intricately tangled world, which tends to take a very long time.
In sum, artificial intelligence's usual weaknesses lie in big-picture judgment, creativity, and causal judgment, so these are the human abilities least easily displaced in the new era. This article's aim is to draw those distinctions; how education should pursue these goals properly belongs to other articles, though a little can be said here.
Under limited learning time and resources, there is an unavoidable trade-off between breadth and depth of study across the possible answers to any problem. As artificial intelligence develops, computers will come to choose among options within a narrow range with little need for human involvement, but the big picture will remain their weak spot. We should therefore spend more effort exploring a broad range of possible answers, and thinking about roughly which paradigm contains the global optimum.
Returning to the monetary-policy example above: nearly every university economics course I have seen concentrates on computing optimal monetary policy given that a central bank exists, without taking other paradigms seriously.
I recall Milton Friedman saying that we must keep all manner of fringe ideas alive, including those that are politically infeasible today, because when a crisis strikes they often enter the feasible range, and may even be the cure. That advice is more apt now than ever.
盧安迪, PhD student in the Department of Economics, Stanford University
Note: strictly speaking, there is a time lag between the temperature rising and people sweating, but this does not affect the main point, since it is easy to think of examples where the lag is harder to detect; a simple example was chosen here to bring out the core logic.
The 2019 China (Shenzhen) IT Leaders Summit was held today. Speaking there, HKEX (00388) Chief Executive Charles Li said that the development of the 5G era should take capital into account. The computing power of the cloud will become a new kind of energy: as with commodity trading in the past, data will be turned into products and ultimately sold to customers to generate revenue. The 5G era, he said, will see new exchanges and new modes of trading emerge.
Capital, he continued, has a strong incentive to support 5G's development, yet today data sits at once very near to capital and very far from it. Very near, in that today's massive volumes of data, once processed by platforms such as Tencent (00700), Alibaba, and JD.com, already yield products and continually refreshed services; in healthcare in particular, data and capital are close, with every data point helping patients receive precise diagnosis and treatment.
And yet data remains very far from capital, Li noted, because today's data, and the artificial intelligence (AI) built on it, still cannot attract enough capital: vast troves of data remain isolated islands, and most of it sits idle, chiefly because much of the groundwork (confirming rights over traded data, pricing frameworks, standards, credit systems, provenance tracing, and the like) has not yet begun. How, along the data chain, data can finally be turned into products that generate revenue still needs thinking through.