Jan 13, 2010 22:02
Introduced here is one of the most representative works of the American master architect Louis Kahn: the Salk Institute for Biological Studies.
This building is counted among the ten great works of modern architecture, placing Kahn alongside Frank Lloyd Wright and others in the pantheon of the discipline. It is a key embodiment of Kahn's architectural philosophy. His mature handling of natural light and of the essential character of building materials gives complete expression to his idea of "Between Silence and Light", and the work speaks for the nature of its materials themselves. Light, mass, site, and people come into relation here naturally; everything is harmonious and at ease, and each volume asserts its own presence. All of these ideas can be understood and seen realized in this building.
Salk Institute for Biological Studies
1. Building: Salk Institute for Biological Studies
2. Site: 10010 North Torrey Pines Road, La Jolla, California
3. Construction: 1959–1965
4. Setting: a 27-acre (about 11-hectare) site on clifftops facing the Pacific Ocean, about 350 feet (roughly 107 meters) above sea level.
The client's brief:
The idea for the institute came from the biologist Dr. Jonas Salk, himself a pioneer in biological research. He raised with Kahn many profound questions, including the future of humanity, the meaning and value of life, and human nature, which fascinated Kahn.
Salk hoped the future research center would have the atmosphere of a monastic retreat, with arcades, colonnades, and courtyards. He asked for spacious, unobstructed interior spaces that could adapt to the changing needs of scientific research, and for building materials that were simple and durable, able to withstand long years of use without high maintenance costs.
Beyond this, the environment of the institute had to keep the researchers' minds clear and free of needless distraction, and the design was to be humane and beautiful. On these principles, Salk asked the architect to create a place that even Picasso would find worth visiting. (Wang Chi-kun, 1994, pp. 89–93) This shows the expectations Salk held for the building; Kahn accepted the challenge, and the depth of his dedication can be seen in the finished design.
The spirit behind the design of the Salk Institute:
Kahn can be described as a great and deeply philosophical architect, and his thinking and his works are not easily understood. Although his prose was beautiful, his explanations of his own theories were often obscure, a trait closely bound up with the decade of "thinking" and "waiting" he spent at the university, which was also the starting point of his rise as a master of modern architecture. The Salk Institute shows his refined design method at work; the completed building, together with Kahn's own fragmentary statements about it, offers clues through which a somewhat clearer picture of his ideas emerges.
Of the Salk Institute commission, Kahn said:
"When Salk came to my office and asked me to build a laboratory, he said: 'There is one thing I hope can be realized. I hope to be able to invite Picasso to this laboratory.' He was implying, of course, that in science, the realm of the measurable, even the smallest living thing has the will to fulfill itself. A microbe wants to be a microbe; a rose wants to be a rose; and a man wants to be a man, to express himself. Salk conveyed this desire for expression: the scientist needs the presence of the immeasurable, and that is the realm of the artist."
Throughout his life Salk insisted that the architecture express the humanistic meaning of medical science, so in planning the institute Kahn's thoughts turned back to the historic gathering places of scholars, such as medieval monasteries and other scholarly retreats. He divided the institute into three groups: a meeting and conference area, a residential area, and the laboratories, each placed on a different part of the cliff.
Planning began as early as 1959. The original scheme comprised three parts, the laboratories, the residences, and the meeting house, each sited as an independent unit on the undulating terrain and linked by strongly directional paths so that the three answered one another and formed a coherent whole.
Kahn spent most of his life in Philadelphia, a city with a strong planning tradition, and from the 1940s took part repeatedly in its urban planning, developing an exceptional eye for urban design. From one point of view, the whole Salk ensemble can be seen as the first "miniature city" Kahn ever undertook. At the same time, the Salk Institute was the first realized instance of Kahn's dream of a new architecture, one reflecting a new humanity.
The formal prototype of the building masses and the central idea of the site:
From Kahn's thinking and from the spatial effect of the built masses, two features of this design can be identified:
1. The mandala concept:
Kahn laid out the Salk Institute in the manner of an Eastern mandala. In Eastern art the mandala represents the order of nature: a set of concentric geometric figures extends outward in layers in all four directions, each figure containing a symbol of a deity or a divine attribute. In Jungian psychology, the mandala is seen as a way of unifying the various aspects of the self. In the laboratory buildings of the Salk Institute one can see Kahn's favored elementary geometric volumes spread across the site plan like a mandala.
2. A continuous progression from outer body to inner spirit:
Kahn's building develops from the outside in: from the service spaces of stairs and toilets (the body); through the laboratory spaces where the biological research takes place, sealed and airtight, monitored by computer, and served by large duct and equipment floors (the mind); through the corridors, the places where people meet (society); through the scientists' private studies, screened in wood and looking out to the sea, places of contemplation; and finally to the courtyard crossed by a single simple channel of water, a place of stillness, a facade facing the sky, a roofless cathedral (the spirit).
Thus the sequence is body > mind > society > spirit: symbols of the attributes of the whole human being. A great building must satisfy every one of these and play a role integrating them (trans. Zhu Xianli, 1989, p. 76). In other words, whatever Kahn sought from architecture, cultural spirit, social need, respect for the site, the search of the human soul, the building must satisfy all of it.
https://www.theguardian.com/news/2018/jan/19/post-work-the-radical-idea-of-a-world-without-jobs
Work is the master of the modern world. For most people, it is impossible to imagine society without it. It dominates and pervades everyday life – especially in Britain and the US – more completely than at any time in recent history. An obsession with employability runs through education. Even severely disabled welfare claimants are required to be work-seekers. Corporate superstars show off their epic work schedules. “Hard-working families” are idealised by politicians. Friends pitch each other business ideas. Tech companies persuade their employees that round-the-clock work is play. Gig economy companies claim that round-the-clock work is freedom. Workers commute further, strike less, retire later. Digital technology lets work invade leisure.
In all these mutually reinforcing ways, work increasingly forms our routines and psyches, and squeezes out other influences. As Joanna Biggs put it in her quietly disturbing 2015 book All Day Long: A Portrait of Britain at Work, “Work is … how we give our lives meaning when religion, party politics and community fall away.”
And yet work is not working, for ever more people, in ever more ways. We resist acknowledging these as more than isolated problems – such is work’s centrality to our belief systems – but the evidence of its failures is all around us.
As a source of subsistence, let alone prosperity, work is now insufficient for whole social classes. In the UK, almost two-thirds of those in poverty – around 8 million people – are in working households. In the US, the average wage has stagnated for half a century.
As a source of social mobility and self-worth, work increasingly fails even the most educated people – supposedly the system’s winners. In 2017, half of recent UK graduates were officially classified as “working in a non-graduate role”. In the US, “belief in work is crumbling among people in their 20s and 30s”, says Benjamin Hunnicutt, a leading historian of work. “They are not looking to their job for satisfaction or social advancement.” (You can sense this every time a graduate with a faraway look makes you a latte.)
Work is increasingly precarious: more zero-hours or short-term contracts; more self-employed people with erratic incomes; more corporate “restructurings” for those still with actual jobs. As a source of sustainable consumer booms and mass home-ownership – for much of the 20th century, the main successes of mainstream western economic policy – work is discredited daily by our ongoing debt and housing crises. For many people, not just the very wealthy, work has become less important financially than inheriting money or owning a home.
Whether you look at a screen all day, or sell other underpaid people goods they can’t afford, more and more work feels pointless or even socially damaging – what the American anthropologist David Graeber called “bullshit jobs” in a famous 2013 article. Among others, Graeber condemned “private equity CEOs, lobbyists, PR researchers … telemarketers, bailiffs”, and the “ancillary industries (dog-washers, all-night pizza delivery) that only exist because everyone is spending so much of their time working”.
The argument seemed subjective and crude, but economic data increasingly supports it. The growth of productivity, or the value of what is produced per hour worked, is slowing across the rich world – despite the constant measurement of employee performance and intensification of work routines that makes more and more jobs barely tolerable.
Unsurprisingly, work is increasingly regarded as bad for your health: “Stress … an overwhelming ‘to-do’ list … [and] long hours sitting at a desk,” the Cass Business School professor Peter Fleming notes in his new book, The Death of Homo Economicus, are beginning to be seen by medical authorities as akin to smoking.
Work is badly distributed. People have too much, or too little, or both in the same month. And away from our unpredictable, all-consuming workplaces, vital human activities are increasingly neglected. Workers lack the time or energy to raise children attentively, or to look after elderly relations. “The crisis of work is also a crisis of home,” declared the social theorists Helen Hester and Nick Srnicek in a paper last year. This neglect will only get worse as the population grows and ages.
And finally, beyond all these dysfunctions, loom the most-discussed, most existential threats to work as we know it: automation, and the state of the environment. Some recent estimates suggest that between a third and a half of all jobs could be taken over by artificial intelligence in the next two decades. Other forecasters doubt whether work can be sustained in its current, toxic form on a warming planet.
Like an empire that has expanded too far, work may be both more powerful and more vulnerable than ever before. We know work’s multiplying problems intimately, but it feels impossible to solve them all. Is it time to start thinking of an alternative?
Our culture of work strains to cover its flaws by claiming to be unavoidable and natural. “Mankind is hardwired to work,” as the Conservative MP Nick Boles puts it in a new book, Square Deal. It is an argument most of us have long internalised.
But not quite all. The idea of a world freed from work, wholly or in part, has been intermittently expressed – and mocked and suppressed – for as long as modern capitalism has existed. Repeatedly, the promise of less work has been prominent in visions of the future. In 1845, Karl Marx wrote that in a communist society workers would be freed from the monotony of a single draining job to “hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner”. In 1884, the socialist William Morris proposed that in “beautiful” factories of the future, surrounded by gardens for relaxation, employees should work only “four hours a day”.
In 1930, the economist John Maynard Keynes predicted that, by the early 21st century, advances in technology would lead to an “age of leisure and abundance”, in which people might work 15 hours a week. In 1980, as robots began to depopulate factories, the French social and economic theorist André Gorz declared: “The abolition of work is a process already underway … The manner in which [it] is to be managed … constitutes the central political issue of the coming decades.”
Since the early 2010s, as the crisis of work has become increasingly unavoidable in the US and the UK, these heretical ideas have been rediscovered and developed further. Brief polemics such as Graeber’s “bullshit jobs” have been followed by more nuanced books, creating a rapidly growing literature that critiques work as an ideology – sometimes labelling it “workism” – and explores what could take its place. A new anti-work movement has taken shape.
Illustration: Nathalie Lees for the Guardian
Graeber, Hester, Srnicek, Hunnicutt, Fleming and others are members of a loose, transatlantic network of thinkers who advocate a profoundly different future for western economies and societies, and also for poorer countries, where the crises of work and the threat to it from robots and climate change are, they argue, even greater. They call this future “post-work”.
For some of these writers, this future must include a universal basic income (UBI) – currently post-work’s most high-profile and controversial idea – paid by the state to every working-age person, so that they can survive when the great automation comes. For others, the debate about the affordability and morality of a UBI is a distraction from even bigger issues.
Post-work may be a rather grey and academic-sounding phrase, but it offers enormous, alluring promises: that life with much less work, or no work at all, would be calmer, more equal, more communal, more pleasurable, more thoughtful, more politically engaged, more fulfilled – in short, that much of human experience would be transformed.
To many people, this will probably sound outlandish, foolishly optimistic – and quite possibly immoral. But the post-workists insist they are the realists now. “Either automation or the environment, or both, will force the way society thinks about work to change,” says David Frayne, a radical young Welsh academic whose 2015 book The Refusal of Work is one of the most persuasive post-work volumes. “So are we the utopians? Or are the utopians the people who think work is going to carry on as it is?”
One of post-work’s best arguments is that, contrary to conventional wisdom, the work ideology is neither natural nor very old. “Work as we know it is a recent construct,” says Hunnicutt. Like most historians, he identifies the main building blocks of our work culture as 16th-century Protestantism, which saw effortful labour as leading to a good afterlife; 19th-century industrial capitalism, which required disciplined workers and driven entrepreneurs; and the 20th-century desires for consumer goods and self-fulfillment.
The emergence of the modern work ethic from this chain of phenomena was “an accident of history,” Hunnicutt says. Before then, “All cultures thought of work as a means to an end, not an end in itself.” From urban ancient Greece to agrarian societies, work was either something to be outsourced to others – often slaves – or something to be done as quickly as possible so that the rest of life could happen.
Even once the new work ethic was established, working patterns continued to shift and be challenged. Between 1800 and 1900, the average working week in the west shrank from about 80 hours to about 60 hours. From 1900 to the 1970s, it shrank steadily further: to roughly 40 hours in the US and the UK. Trade union pressure, technological change, enlightened employers, and government legislation all progressively eroded the dominance of work.
Sometimes, economic shocks accelerated the process. In Britain in 1974, Edward Heath’s Conservative government, faced with a chronic energy shortage caused by an international oil crisis and a miners’ strike, imposed a national three-day working week. For the two months it lasted, people’s non-work lives expanded. Golf courses were busier, and fishing-tackle shops reported large sales increases. Audiences trebled for late-night BBC radio DJs such as John Peel. Some men did more housework: the Colchester Evening Gazette interviewed a young married printer who had taken over the hoovering. Even the Daily Mail loosened up, with one columnist suggesting that parents “experiment more in their sex lives while the children are doing a five-day week at school”.
Piccadilly Square in London during the three-day week of 1974. Photograph: PA Archive
The economic consequences were mixed. Most people’s earnings fell. Working days became longer. Yet a national survey of companies for the government by the management consultants Inbucon-AIC found that productivity improved by about 5%: a huge increase by Britain’s usual sluggish standards. “Thinking was stimulated” inside Whitehall and some companies, the consultants noted, “on the possibility of arranging a permanent four-day week.”
Nothing came of it. But during the 60s and 70s, ideas about redefining work, or escaping it altogether, were commonplace in Europe and the US: from corporate retreats to the counterculture to academia, where a new discipline was established: leisure studies, the study of recreations such as sport and travel.
In 1979, Bernard Lefkowitz, then a well-known American journalist, published Breaktime: Living Without Work in a Nine to Five World, a book based on interviews with 100 people who had given up their jobs. He found a former architect who tinkered with houseboats and bartered; an ex-reporter who canned his own tomatoes and listened to a lot of opera; and a former cleaner who enjoyed lie-ins and a sundeck overlooking the Pacific. Many of the interviewees were living in California, and despite moments of drift and doubt, they reported new feelings of “wholeness” and “openness to experience”.
By the end of the 70s, it was possible to believe that the relatively recent supremacy of work might be coming to an end in the more comfortable parts of the west. Labour-saving computer technologies were becoming widely available for the first time. Frequent strikes provided highly public examples of work routines being interrupted and challenged. And crucially, wages were high enough, for most people, to make working less a practical possibility.
Instead, the work ideology was reimposed. During the 80s, the aggressively pro-business governments of Margaret Thatcher and Ronald Reagan strengthened the power of employers, and used welfare cuts and moralistic rhetoric to create a much harsher environment for people without jobs. David Graeber, who is an anarchist as well as an anthropologist, argues that these policies were motivated by a desire for social control. After the political turbulence of the 60s and 70s, he says, “Conservatives freaked out at the prospect of everyone becoming hippies and abandoning work. They thought: ‘What will become of the social order?’”
It sounds like a conspiracy theory, but Hunnicutt, who has studied the ebb and flow of work in the west for almost 50 years, says Graeber has a point: “I do think there is a fear of freedom – a fear among the powerful that people might find something better to do than create profits for capitalism.”
During the 90s and 00s, the counter-revolution in favour of work was consolidated by centre-left politicians. In Britain under Tony Blair’s government, the political and cultural status of work reached a zenith. Unemployment was lower than it had been for decades. More women than ever were working. Wages for most people were rising. New Labour’s minimum wage and working tax credits lifted and subsidised the earnings of the low-paid. Poverty fell steadily. The chancellor Gordon Brown, one of the country’s most famous workaholics, appeared to have found a formula that linked work to social justice.
A large part of the left has always organised itself around work. Union activists have fought to preserve it, by opposing redundancies, and sometimes to extend it, by securing overtime agreements. “With the Labour party, the clue is in the name,” says Chuka Umunna, the centre-left Labour MP and former shadow business secretary, who has become a prominent critic of post-work thinking as it has spread beyond academia. The New Labour governments were also responding, Umunna says, to the failure of their Conservative predecessors to actually live up to their pro-work rhetoric: “There had been such high levels of unemployment under the Tories, our focus was always going to be pro-job.”
In this earnest, purposeful context, the anti-work tradition, when it was remembered at all, could seem a bit decadent. One of its few remaining British manifestations was the Idler magazine, which was set up in 1993 and acquired a cult status beyond its modest circulation. In its elegantly retro pages, often rather posh men wrote about the pleasures of laziness – while on the side busily producing books and newspaper articles, and running a creative consultancy with corporate clients, Idle Industries. By the early 21st century, the work culture seemed inescapable.
The work culture has many more critics now. In the US, sharp recent books such as Private Government: How Employers Rule Our Lives (and Why We Don’t Talk About It) by the philosopher Elizabeth Anderson, and No More Work: Why Full Employment Is a Bad Idea by the historian James Livingston, have challenged the dictatorial powers and assumptions of modern employers; and also the deeply embedded American notion that the solution to any problem is working harder.
In the UK, even professionally optimistic business journals have begun to register the extent of work’s crises. In his 2016 book The Wealth of Humans: Work and its Absence in the 21st Century, the Economist columnist Ryan Avent predicted that automation would lead to “a period of wrenching political change” before “a broadly acceptable social system” emerges.
Post-work ideas are also circulating in party politics. Last April, the Green party proposed that weekends be lengthened to three days. In 2016, shadow chancellor John McDonnell said Labour was “developing” a proposal for a UBI in the UK. Labour leader Jeremy Corbyn told his party conference last September that automation “can be the gateway for a new settlement between work and leisure – a springboard for expanded creativity and culture”.
“It felt like a watershed moment,” says Will Stronge, head of Autonomy, a British thinktank set up last year to explore the crisis of work and find ways out of it. “We’re in contact with Labour, and we’re going to meet the Greens soon.” Like most British post-workists, he is leftwing in his politics, part of the new milieu of ambitious young activist intellectuals that has grown up around Corbyn’s leadership. “We haven’t talked to people on the right,” Stronge admits. “No one’s got in contact with us.”
Yet post-work has the potential to appeal to conservatives. Some post-workists think work should not be abolished but redistributed, so that every adult labours for roughly the same satisfying but not exhausting number of hours. “We could say to people on the right: ‘You think work is good for people. So everyone should have this good thing,’” says James Smith, a post-workist whose day job is lecturing in 18th-century English literature at Royal Holloway, University of London. “Working less also ought to be attractive to conservatives who value the family.”
Outside the insular, intense working cultures of Britain and the US, the reduction of work has long been a mainstream notion. In France in 2000, Lionel Jospin’s leftwing coalition government introduced a maximum 35-hour week for all employees, partly to reduce unemployment and promote gender equality, under the slogan, “Work less – live more.” The law was not absolute (some overtime was permitted) and has been weakened since, but many employers have opted to keep a 35-hour week. In Germany, the largest trade union, IG Metall, which represents electrical and metal workers, is campaigning for shift workers and people caring for children or other relatives to have the option of a 28-hour week.
Even in Britain and the US, the vogues for “downshifting” and “work-life balance” during the 90s and 00s represented an admission that the intensification of work was damaging our lives. But these were solutions for individuals, and often wealthy individuals – the rock star Alex James attracted huge media attention for becoming a cheesemaker in the Cotswolds – rather than society as a whole. And these were solutions intended to bring minimal disruption to a free-market economy that was still relatively popular and functional. We are not in that world any more.
And yet the difficulty of shedding the burdens and satisfactions of work is obvious when you meet the post-workists. Explorers of a huge economic and social territory that has been neglected for decades – like Keynes and other thinkers who challenged the rule of work – they alternate between confidence and doubt.
“I love my job,” Helen Hester, a professor of media and communication at the University of West London, told me. “There’s no boundary between my time off and on. I’m always doing admin, or marking, or writing something. I’m working the equivalent of two jobs.” Later in our interview, which took place in a cafe, among other customers working on laptops – a ubiquitous modern example of leisure’s colonisation by work – she said knowingly but wearily: “Post-work is a lot of work.”
Yet the post-workists argue that it is precisely their work-saturated lives – and their experience of the increasing precarity of white-collar employment – that qualify them to demand a different world. Like many post-workists, Stronge has been employed for years on poorly paid, short-term academic contracts. “I’ve worked as a breakfast cook. I’ve been a Domino’s delivery driver,” he told me. “I once worked in an Indian restaurant while I was teaching. My students would come in to eat, and see me cooking, and say: ‘Hi, is that you, Will?’ Unconsciously, that’s why Autonomy came about.”
James Smith was the only post-workist I met who had decided to do less work. “I have one weekday off, and cram everything into the other days,” he said, as we sat in his overstuffed office on the Royal Holloway campus outside London. “I spend it with our one-and-a-half-year-old. It’s a very small post-work gesture. But it was a strange sensation at first: almost like launching myself off the side of a swimming pool. It felt alien – almost impossible to do, without the moral power of having a child to look after.”
Wheelbarrow and computer monitors in an empty office. Photograph: Getty
Defenders of the work culture such as business leaders and mainstream politicians habitually question whether pent-up modern workers have the ability to enjoy, or even survive, the open vistas of time and freedom that post-work thinkers envisage for them. In 1989, two University of Chicago psychologists, Judith LeFevre and Mihaly Csikszentmihalyi, conducted a famous experiment that seemed to support this view. They recruited 78 people with manual, clerical and managerial jobs at local companies, and gave them electronic pagers. For a week, at frequent but random intervals, at work and at home, these employees were contacted and asked to fill in questionnaires about what they were doing and how they were feeling.
The experiment found that people reported “many more positive feelings at work than in leisure”. At work, they were regularly in a state the psychologists called “flow” – “enjoying the moment” by using their knowledge and abilities to the full, while also “learning new skills and increasing self-esteem”. Away from work, “flow” rarely occurred. The employees mainly chose “to watch TV, try to sleep, [and] in general vegetate, even though they [did] not enjoy doing these things”. US workers, the psychologists concluded, had an “inability to organise [their] psychic energy in unstructured free time”.
To the post-workists, such findings are simply a sign of how unhealthy the work culture has become. Our ability to do anything else, only exercised in short bursts, is like a muscle that has atrophied. Frayne told me: “My dad works for Corus Steel [now Tata Steel]. He’s 58, and he joined as an apprentice at 16. It’s manual work, in the hot mill. The night shifts are killing him. He’s going to get out in February. He’s been getting his savings and pension in order. But now he’s terrified. He says: ‘What am I going to do when I wake up in the house on a Tuesday morning?’ Leisure is a capacity.”
Graeber argues that in a less labour-intensive society, our capacity for things other than work could be built up again. “People will come up with stuff to do if you give them enough time. I lived in a village in Madagascar once. There was this intricate sociability. People would hang around in cafes, gossiping, having affairs, using magic. It was a very complex drama – the kind that can only develop when you have enough time. They certainly weren’t bored!”
In western countries too, he argues, the absence of work would produce a richer culture. “The postwar years, when people worked less and it was easier to be on the dole, produced beat poetry, avant garde theatre, 50-minute drum solos, and all Britain’s great pop music – art forms that take time to produce and consume.”
The return of the drum solo may not be everyone’s idea of progress. But the possibilities of post-work, like all visions of the future, walk a difficult line between being too concrete and too airy. Stronge suggests a daily routine for post-work citizens that would include a provocative degree of state involvement: “You get your UBI payment from the government. Then you get a form from your local council telling you about things going on in your area: a five-a-side football tournament, say, or community activism – Big Society stuff, almost.” Other scenarios he proposes may disappoint those who dream of non-stop leisure: “I’m under no illusion that paid work is going to disappear entirely. It just may not be directed by someone else. You take as long as you want, have a long lunch, spread the work through the day.”
Town and city centres today are arranged for work and consumption – work’s co-conspirator – and very little else; this is one of the reasons a post-work world is so hard to imagine. Adapting office blocks and other workplaces for other purposes would be a huge task, which the post-workists have only just begun to think about. One common proposal is for a new type of public building, usually envisaged as a well-equipped combination of library, leisure centre and artists’ studios. “It could have social and care spaces, equipment for programming, for making videos and music, record decks,” says Stronge. “It would be way beyond a community centre, which can be quite … depressing.”
This vision of state-supported but liberated and productive citizens owes a lot to Ivan Illich, the half-forgotten Austrian social critic who was a leftwing guru during the 70s. In his intoxicating 1973 book Tools for Conviviality, Illich attacked the “serfdom” created by industrial machinery, and demanded: “Give people tools that guarantee their right to work with high, independent efficiency … from power drills to mechanised pushcarts.” Illich wanted the public to rediscover what he saw as the freedom of the medieval artisan, while also embracing the latest technology.
There is a strong artisan tendency in today’s post-work movement. As Hester characterises it: “Instead of having jobs, we’re going to do craft, to make our own clothes. It’s quite an exclusionary vision: to do those things, you need to be able-bodied.” She also detects a deeper conservative impulse: “It’s almost as if some people are saying: ‘Since we’re going to challenge work, other things have to stay the same.’”
Instead, she would like the movement to think more radically about the nuclear home and family. Both have been so shaped by work, she argues, that a post-work society will redraw them. The disappearance of the paid job could finally bring about one of the oldest goals of feminism: that housework and raising children are no longer accorded a lower status. With people having more time, and probably less money, private life could also become more communal, she suggests, with families sharing kitchens, domestic appliances, and larger facilities. “There have been examples of this before,” she says, “like ‘Red Vienna’ in the early 20th century, when the [social democratic] city government built housing estates with communal laundries, workshops, and shared living spaces that were quite luxurious.” Post-work is about the future, but it is also bursting with the past’s lost possibilities.
Now that work is so ubiquitous and dominant, will today’s post-workists succeed where all their other predecessors did not? In Britain, possibly the sharpest outside judge of the movement is Frederick Harry Pitts, a lecturer in management at Bristol University. Pitts used to be a post-workist himself. He is young and leftwing, and before academia he worked in call centres: he knows how awful a lot of modern work is. Yet Pitts is suspicious of how closely the life post-workists envisage – creative, collaborative, high-minded – resembles the life they already live. “There is little wonder the uptake for post-work thinking has been so strong among journalists and academics, as well as artists and creatives,” he wrote in a paper co-authored last year with Ana Dinerstein of Bath University, “since for these groups the alternatives [to traditional work] require little adaptation.”
Pitts also argues that post-work’s optimistic visions can be a way of avoiding questions about power in the world. “A post-work society is meant to resolve conflicts between different economic interest groups – that’s part of its appeal,” he told me. Tired of the never-ending task of making work better, some socialists have latched on to post-work, he argues, in the hope that exploitation can finally be ended by getting rid of work altogether. He says this is both “defeatist” and naive: “Struggles between economic interest groups can’t ever be entirely resolved.”
Yet Pitts is much more positive about post-work’s less absolutist proposals, such as redistributing working hours more equally. “There has to be a major change to work,” he says. “In that sense, these people want the right thing.” Other critics of post-work are also less dismissive than they first sound. Despite being a Tory MP from the most pro-business wing of his party, Nick Boles accepts in his book that a future society “may redefine work to include child-rearing and taking care of elderly relatives, and finally start valuing these contributions properly”. Post-work is spreading feminist ideas to new places.
Hunnicutt, the historian of work, sees the US as more resistant than other countries to post-work ideas – at least for now. When he wrote an article for the website Politico in 2014 arguing for shorter working hours, he was shocked by the reaction it provoked. “It was a harsh experience,” he says. “There were personal attacks by email and telephone – that I was some sort of communist and devil-worshipper.” Yet he senses weakness behind such strenuous efforts to shut the work conversation down. “The role of work has changed profoundly before. It’s going to change again. It’s probably already in the process of changing. The millennial generation know that the Prince Charming job, that will meet all your needs, has gone.”
After meeting Pitts in Bristol, I went to a post-work event there organised by Autonomy. It was a bitter Monday evening, but liberal Bristol likes social experiments and the large city-centre room was almost full. There were students, professionals in their 30s, even a middle-aged farmer. They listened attentively for two hours while Frayne and two other panellists listed the oppressions of work and then hazily outlined what could replace it. When the audience finally asked questions, they all accepted the post-workists’ basic premises. An appetite for a society that treats work differently certainly exists. But it is not, so far, overwhelming: the evening’s total attendance was less than 70.
And yet, as Frayne points out, “in some ways, we’re already in a post-work society. But it’s a dystopic one.” Office employees constantly interrupting their long days with online distractions; gig-economy workers whose labour plays no part in their sense of identity; and all the people in depressed, post-industrial places who have quietly given up trying to earn – the spectre of post-work runs through the hard, shiny culture of modern work like hidden rust.
Last October, research by Sheffield Hallam University revealed that UK unemployment is three times higher than the official count of those claiming the dole, thanks to people who are either “economically inactive” – no longer seeking work – or receiving incapacity benefits. When Frayne is not talking and writing about post-work, or doing his latest temporary academic job, he sometimes makes a living collecting social data for the Welsh government in former mining towns. “There is lots of worklessness,” he says, “but with no social policies to dignify it.”
Creating a more benign post-work world will be more difficult now than it would have been in the 70s. In today’s lower-wage economy, suggesting people do less work for less pay is a hard sell. As with free-market capitalism in general, the worse work gets, the harder it is to imagine actually escaping it, so enormous are the steps required.
But for those who think work will just carry on as it is, there is a warning from history. On 1 May 1979, one of the greatest champions of the modern work culture, Margaret Thatcher, made her final campaign speech before being elected prime minister. She reflected on the nature of change in politics and society. “The heresies of one period,” she said, always become “the orthodoxies of the next”. The end of work as we know it will seem unthinkable – until it has happened.
Main illustration: Nathalie Lees
https://codeburst.io/full-stack-single-page-application-with-vue-js-and-flask-b1e036315532
https://www.reddit.com/r/flask/comments/6kuhjk/flask_api_backend_vuejs_frontend/
https://github.com/yymm/flask-vuejs
I wanted a slightly different case. What if I need a single-page application built with Vue.js (using single-file components, vue-router in HTML5 history mode and other good features) and served by a Flask web server? In a few words, this should work as follows:
- Flask serves my index.html, which contains my Vue.js app
- during front-end development I use Webpack with all the cool features it provides
- Flask has API endpoints I can access from my SPA
- I can access API endpoints even while I run Node.js for front-end development
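A minimal sketch of that setup (the endpoint names and the dist/ build path are my own illustrative assumptions, not from the post): Flask exposes an /api route and falls back to the SPA shell for everything else, so vue-router's history mode keeps working on page refresh.

```python
from flask import Flask, jsonify, send_from_directory

# Sketch: serve the Webpack-built index.html for every non-API path, and
# expose JSON endpoints under /api/ for the Vue SPA to call.
# "dist" and "/api/ping" are hypothetical names for illustration.
app = Flask(__name__, static_folder="dist/static")

@app.route("/api/ping")
def ping():
    # Example API endpoint the Vue app can fetch.
    return jsonify({"msg": "pong"})

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def catch_all(path):
    # Always return the SPA shell; the client-side router handles the rest.
    return send_from_directory("dist", "index.html")
```

During front-end development, Webpack's devServer.proxy can forward /api requests to the Flask port (e.g. localhost:5000), so the Node dev server and the Flask API run side by side.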
http://thesoliditydev.com/contract/update/2018/01/17/lottery/
Come in fellow humans to this first example of smart contract. You can view the full source in Github and follow along.
If you are completely new to smart contract development, please go to ethereumdev.io to get the basics. It should not take long to learn; it took me ~2 weeks to learn the basics and run some experiments, but your mileage may vary.
The best resource for debugging is the Remix IDE
The purpose of this contract is to run a simple lottery where the contract creator is in charge of supplying the lucky numbers. I will discuss later on why generating random numbers in a smart contract is almost never a good idea. Now, let me explain the structure of this exercise and some tools involved. The contract, written in Solidity (a .sol file), is loaded and deployed with Python (>3.4) using Web3.py.
Let's go step by step so you can follow along:
All the heavy work is done by contract_loader.py. This file is a dummy class (to be improved) that encapsulates some of the most common work around contract loading and deployment. It is imported and executed through a helper script that is also in charge of running our contract's methods. We use a class for interacting with the Ethereum network called blockchain.py. It connects to our testrpc service running on our local machine and performs some low-level operations like deploying contracts and getting accounts. When the contract is deployed, we receive a tx_hash from the network acknowledging the transaction. This particular 'electronic receipt' can be used as many times as you want to recover the contract from the network using the blockchain.py class. The method get_contract_instance() is then used to access the contract instance; all the Solidity methods are then accessible at the Python level using the instance it returns.
Lottery general discussion
There is nothing really fancy in this contract. Like in most Solidity contracts, most of the coding is spent dealing with array logic. This particular Lottery runs for as long as the owner wants and is closed by supplying the winning numbers manually. The winners are then copied to a separate array to split the prize evenly. For a complete discussion on how to securely generate a random number in a Solidity contract, please refer to this thread.
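As a rough illustration of that closing logic (a hypothetical Python re-creation, not the actual Solidity source), winners are whoever matched the supplied numbers, and the pot is divided evenly among them:

```python
def settle_lottery(tickets, winning_numbers, prize_pool):
    """Toy re-creation of the contract's closing step: players whose
    ticket matches the owner-supplied winning numbers split the pot.
    tickets: dict mapping player address -> chosen numbers."""
    target = tuple(sorted(winning_numbers))
    winners = [player for player, numbers in tickets.items()
               if tuple(sorted(numbers)) == target]
    if not winners:
        return {}
    # Integer division, as one would do with wei amounts on-chain.
    share = prize_pool // len(winners)
    return {player: share for player in winners}
```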
Final words
Please, feel free to send me any questions you might have regarding smart contracts or this one in particular. I will be more than happy to help => me @ jesusfloressanjose.com
news.ycombinator.com/item?id=16160394
This site is dedicated to helping you start your own Internet Service Provider. Specifically, this guide is about building a Wireless ISP (WISP).
This guide is focused on the very earliest stages of starting a WISP - determining feasibility up through connecting the first few customers. There are many challenges that will come up at 100, 1,000 or 10,000 customers that are not (yet) covered in this guide.
For context, this site is the result of this discussion on Hacker News.
Join the discussion! Chat with me (the author) and others interested in this kind of thing here: #startyourownisp:matrix.org.
This site is a work in progress!
Only some of the content is up so far and there’s still some bugs in the interface. Use the form on the bottom left to be notified of new updates.
Getting Started
What is a WISP? And why might you want to build one? Also defines some terminology.
Costs What does it cost to build a wireless Internet Service Provider? (Link to a Google Sheet that you can copy and customize.)
About Me Who am I? Why am I doing this?
Step by Step Guide
Step 1: Evaluate an Area: Make sure your area is a good candidate for a Wireless Internet network.
Step 2: Find a Fiber Provider: Find a building where you can purchase a fiber connection and use the rooftop to start your wireless network.
Step 3: Find Relay Sites: Extend your network wirelessly toward your customers.
Step 4: Pick a Hardware Platform: Evaluate available options for wireless hardware.
Step 5: Billing and Customer Management: Make sure you’re able to get paid and support your customers.
Step 6: Network Topology: Design your network topology to make your network reliable and scalable. Routers, switches, IP addresses, VLANs, etc.
Step 7: Build your Infrastructure: Install hardware for your fiber connection and your relay sites.
Step 8: Install a Customer: Get your first customer online!
Step 9: Marketing: Let people know about your service so they can experience a better Internet connection!
Step 10: Maintenance: Keep your network running smoothly.
Miscellaneous
Tools you’ll want to have A list of the tools you’ll need to install relay sites and customers.
Aim a Backhaul A guide describing the proper techniques for aiming backhauls. Designed to be printed out and taken to the site for reference.
Backhaul Picker If you just need to get a solid wireless connection from Point A to Point B then use this interactive guide to pick the right equipment and get it set up.
Channel Planning Avoid self interference by carefully choosing channels for your access points and backhauls.
MDUs (Multiple Dwelling Units) Best practices for providing service to apartment buildings, condos, attached townhomes, etc.
Guide to Google Earth Some tips and tricks for using Google Earth to plan and build your network.
Weather Proof your Network Rain, snow, ice and wind can all cause problems for a wireless network.
Roof and Ladder Safety Stay safe out there!
https://marco.org/2018/01/17/end-of-conference-era
news.ycombinator.com/item?id=16173031 (January 17, 2018)
Chris Adamson notes a significant contraction in iOS and related conferences recently (via Michael Tsai).
Having attended (and sometimes spoken at) many of these conferences over the years, I can’t deny the feeling I’ve had in the last couple of years that the era of the small Apple-ish developer-ish conference is mostly or entirely behind us.
I don’t think that’s a bad thing. This style of conference had a great run, but it always had major and inherent limitations, challenges, and inefficiencies:
- Cost: With flights, lodging, and the ticket adding up to thousands of dollars per conference, most people are priced out. The vast majority of attendees’ money isn’t even going to the conference organizers or speakers — it’s going to venues, hotels, and airlines.
- Size: There’s no good size for a conference. Small conferences exclude too many people; big conferences impede socialization and logistics.
- Logistics: Planning and executing a conference takes such a toll on the organizers that few of them have ever lasted more than a few years.
- Format: Preparing formal talks with slide decks is a massively inefficient use of the speakers’ time compared to other modern methods of communicating ideas, and sitting there listening to blocks of talks for long stretches while you’re trying to stay awake after lunch is a pretty inefficient way to hear ideas.

It’s getting increasingly difficult for organizers to sell tickets, in part because it’s hard to get big-name speakers without the budget to pay them much (which would significantly drive up ticket costs, which exacerbates other problems), but also because conferences now have much bigger competition in connecting people to their colleagues or audiences.
There’s no single factor that has made it so difficult, but the explosion of podcasts and YouTube over the last few years must have contributed significantly. Podcasts are a vastly more time-efficient way for people to communicate ideas than writing conference talks, and people who prefer crafting their message as a produced piece or with multimedia can do the same thing (and more) on YouTube. Both are much easier and more versatile for people to consume than conference talks, and they can reach and benefit far more people.
Ten years ago, you had to go to conferences to hear most prominent people in our industry speak in their own voice, or to get more content than an occasional blog post. Today, anyone who could headline a conference probably has a podcast or YouTube channel with hours of their thoughts and ideas available to anyone, anywhere in the world, anytime, for free.
But all of that media can’t really replace the socializing, networking, and simply fun that happened as part of (or sometimes despite) the conference formula.
I don’t know how to fix conferences, but the first place I’d start on that whiteboard is by getting rid of all of the talks, then trying to find different ways to bring people together — and far more of them than before.
Or maybe we’ve already solved these problems with social networks, Slack groups, podcasts, and YouTube, and we just haven’t fully realized it yet.
Microsoft MakeCode brings computer science to life for all students with fun projects, immediate results, and both block and text editors for learners at different levels.
https://github.com/jedisct1/piknik
news.ycombinator.com/item?id=16176666
https://hanxiao.github.io/2018/01/10/Build-Cross-Lingual-End-to-End-Product-Search-using-Tensorflow/
news.ycombinator.com/item?id=16173143
Product search is one of the key components in an online retail store. Essentially, you need a system that matches a text query with a set of products in your store. A good product search can understand user’s query in any language, retrieve as many relevant products as possible, and finally present the result as a list, in which the preferred products should be at the top, and the irrelevant products should be at the bottom.
Unlike text retrieval (e.g. Google web search), products are structured data. A product is often described by a list of key-value pairs, a set of pictures and some free text. In the developers’ world, Apache Solr and Elasticsearch are known as de-facto solutions for full-text search, making them a top contender for building e-commerce product search.
At the core, Solr/Elasticsearch is a symbolic information retrieval (IR) system. Mapping query and document to a common string space is crucial to the search quality. This mapping process is an NLP pipeline implemented with Lucene Analyzer. In this post, I will reveal some drawbacks of such a symbolic-pipeline approach, and then present an end-to-end way to build a product search system from query logs using Tensorflow. This deep learning based system is less prone to spelling errors, leverages underlying semantics better, and scales out to multiple languages much easier.
Recap Symbolic Approach for Product Search
Let’s first do a short review of the classic approach. Typically, an information retrieval system can be divided into three tasks: indexing, parsing and matching. As an example, the next figure illustrates a simple product search system:
- indexing: storing products in a database with attributes as keys, e.g. brand, color, category;
- parsing: extracting attribute terms from the input query, e.g. red shirt -> {"color": "red", "category": "shirt"};
- matching: filtering the product database by attributes. If no attribute is found in the query, then the system falls back to exact string matching, i.e. searching every possible occurrence in the database.

Note that parsing and matching must be done for each incoming query, whereas indexing can be done less frequently depending on the stock update speed.
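The three tasks can be sketched in a few lines of Python (a toy attribute lookup of my own, not how Solr/Elasticsearch actually implement them):

```python
# A tiny "indexed" product database: attributes are the keys.
PRODUCTS = [
    {"id": 1, "brand": "nike", "color": "red", "category": "shirt"},
    {"id": 2, "brand": "adidas", "color": "red", "category": "shoe"},
    {"id": 3, "brand": "nike", "color": "blue", "category": "shirt"},
]
# A hand-maintained lexicon mapping known terms to their attribute name.
KNOWN = {"red": "color", "blue": "color", "shirt": "category",
         "shoe": "category", "nike": "brand", "adidas": "brand"}

def parse(query):
    # e.g. "red shirt" -> {"color": "red", "category": "shirt"}
    return {KNOWN[tok]: tok for tok in query.split() if tok in KNOWN}

def match(query):
    # Filter the database by every attribute extracted from the query.
    attrs = parse(query)
    return [p["id"] for p in PRODUCTS
            if all(p.get(k) == v for k, v in attrs.items())]
```

The hard-coded KNOWN lexicon is exactly the kind of language-dependent knowledge the post argues against later.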
Many existing solutions such as Apache Solr and Elasticsearch follow this simple idea, except they employ more sophisticated algorithms (e.g. Lucene) for these three tasks. Thanks to these open-source projects many e-commerce businesses are able to build product search on their own and serve millions of requests from customers.
Symbolic IR System
Note, at the core, Solr/Elasticsearch is a symbolic IR system that relies on the effective string representation of the query and product. By parsing or indexing, the system knows which tokens in the query or product description are important. These tokens are the primitive building blocks for matching. Extracting important tokens from the original text is usually implemented as a NLP pipeline, consisting of tokenization, lemmatization, spelling correction, acronym/synonym replacement, named-entity recognition and query expansion.
Formally, given a query $q\in \mathcal{Q}$ and a product $p\in\mathcal{P}$, one can think the NLP pipeline as a predefined function that maps from $\mathcal{Q}$ or $\mathcal{P}$ to a common string space $\mathcal{S}$, i.e. $f: \mathcal{Q}\mapsto \mathcal{S}$ or $g: \mathcal{P}\mapsto \mathcal{S}$, respectively. For the matching task, we just need a metric $m: \mathcal{S} \times \mathcal{S} \mapsto [0, +\infty)$ and then evaluate $m\left(f(q),g(p)\right)$, as illustrated in the figure below.
Pain points of Symbolic IR System
If you are a machine learning enthusiast who believes everything should be learned from data, you must have tons of questions about the last figure. To name a few:
- Why are $f$ and $g$ predefined? Why can’t we learn $f$ and $g$ from data?
- Why is $\mathcal{S}$ a string space? Why can’t it be a vector space?
- Why is $m$ a string/key matching function? Why can’t we use a more well-defined math function, e.g. Euclidean distance, cosine function? Wait, why don’t we just learn $m$?

In fact, these questions reveal two pain points of a symbolic IR system.
The NLP Pipeline in Solr/Elasticsearch is based on the Lucene Analyzer class. A simple analyzer such as StandardAnalyzer would just split the sequence by whitespace and remove some stopwords. Quite often you have to extend it by adding more and more functionalities, which eventually results in a pipeline as illustrated in the figure below.
While it looks legit, my experience is that such NLP pipeline suffers from the following drawbacks:
- The system is fragile. As the output of every component is the input of the next, a defect in an upstream component can easily break down the whole system. For-example,canyourtoken izer split thiscorrectly⁇
- Dependencies between components can be complicated. A component can take from and output to multiple components, forming a directed acyclic graph. Consequently, you may have to introduce some asynchronous mechanisms to reduce the overall blocking time.
- It is not straightforward to improve the overall search quality. An improvement in one or two components does not necessarily improve the end-user search experience.
- The system doesn’t scale out to multiple languages. To enable cross-lingual search, developers have to rewrite those language-dependent components in the pipeline for every language, which increases the maintenance cost.

2. Symbolic System does not Understand Semantics without Hard Coding
A good IR system should understand that trainer is sneaker by using some semantic knowledge. No one likes hard-coding this knowledge, especially you machine learning guys. Unfortunately, it is difficult for Solr/Elasticsearch to understand any acronym/synonym unless you implement the SynonymFilter class, which is basically a rule-based filter. This severely restricts the generalizability and scalability of the system, as you need someone to maintain a hard-coded, language-dependent lexicon. If one can represent a query/product by a vector in a space learned from actual data, then synonyms and acronyms could be easily found in the neighborhood without hard coding.
Neural IR System
With aforementioned problems in mind, my motivation is twofold:
- eliminate the NLP pipeline to make the system more robust and scalable;
- find a space for query and product that can better represent underlying semantics.

The next figure illustrates a neural information retrieval framework, which looks pretty much the same as its symbolic counterpart, except that the NLP pipeline is replaced by a deep neural network and the matching job is done in a learned common space. Now $f$ serves as a query encoder, $g$ serves as a product encoder.
End-to-End Model Training
There are several ways to train a neural IR system. One of the most straightforward (but not necessarily the most effective) ways is end-to-end learning. Namely, your training data is a set of query-product pairs feeding on the top-right and top-left blocks in the last figure. All the other blocks such as $f$, $g$, $m$ and $\mathcal{S}$ are learned from data. Depending on the engineering requirements or resource limitations, one can also fix or pre-train some of the components.
Where Do Query-Product Pairs Come From?
To train a neural IR system in an end-to-end manner, you need some associations between queries and products, such as the query log. This log should contain what products a user interacted with (click, add to wishlist, purchase) after typing a query. Typically, you can fetch this information from the query/event log of your system. After some work on segmenting (by time/session), cleaning and aggregating, you can get pretty accurate associations. In fact, any user-generated text can be good association data. This includes comments, product reviews, and crowdsourcing annotations. The next figure shows an example of what German and British users clicked after searching for ananas and pineapple on Zalando, respectively.
Increasing the diversity of training data source is beneficial to a neural IR system, as you certainly want the system to generalize more and not to mimic the behavior of the symbolic counterpart. On the contrary, if your only data source is the query log of a symbolic IR system, then your neural IR system is inevitably biased. The performance of your final system highly depends on the ability of the symbolic system. For example, if your current symbolic system doesn’t correct spell mistakes and returns nothing when user types adidaas, then you won’t find any product associated with adidaas from the query log. As a consequence, your neural IR system is unlikely to learn the ability of spell checking.
In that sense, we are “bootstrapping” the symbolic IR system to build a neural IR system. Given enough training data, we hope that some previously hard-coded rules or manually coded functions can be picked up and generalized by deep neural networks.
What about Negative Query-Product Pairs?
At some point, you will probably need negative query-product pairs to train a neural IR system more effectively. In general, negative means that a product is irrelevant to the query. A straightforward way is just randomly sampling all products, hoping that no positive product gets accidentally sampled. It is easy to implement and actually not a bad idea in practice. A more sophisticated solution could be collecting those products that generate impressions on customers yet receive no clicks as negative ones. This requires some collaboration between you, the frontend team and the logging team, making sure those no-click items are really uninteresting to users, and not skipped due to screen resolution, lazy loading, etc.
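The straightforward random-sampling approach might look like this (a minimal sketch; the positives filter and the fixed seed are my own illustrative choices):

```python
import random

def sample_negatives(all_products, positives, k, seed=0):
    """Naive negative sampling: draw k products uniformly at random,
    skipping anything known to be positive for the query."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    positive_set = set(positives)
    pool = [p for p in all_products if p not in positive_set]
    return rng.sample(pool, k)
```

In a real pipeline one would usually skip the explicit positives filter and accept the small collision probability, since filtering per query can be costly at scale.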
If you are looking for a more formal and sound solution, then Positive-Unlabeled Learning (PU learning) could be interesting to you. Instead of relying on heuristics for identifying negative data, PU learning regards unlabeled data as negative data with smaller weights. “Positive-Unlabeled Learning with Non-Negative Risk Estimator” is a nice paper about unbiased PU learning published in NIPS 2017.
Symbolic vs. Neural IR System
Before I dive into details, let’s take a short break. As you can see I spent quite some effort on explaining symbolic and neural IR systems. This is because the symbolic system is such a classic way to do IR, and developers get used to it. With the help of Apache Solr, Elasticsearch and Lucene, medium and small e-commerce businesses are able to build their own product search in a short time. It is the de-facto solution. On the other hand, Neural IR is a new concept emerging just recently. There are not so many off-the-shelf packages available. Plus, training a neural IR system requires some data. The next table summarizes the pros and cons of two systems.
Symbolic IR system:
- Pros: efficient at query time; straightforward to implement; results are interpretable; many off-the-shelf packages.
- Cons: fragile; hard-coded knowledge; high maintenance costs.

Neural IR system:
- Pros: automatic; resilient to noise; scales out easily; requires little domain knowledge.
- Cons: less efficient at query time; hard to add business rules; requires a lot of data.
This is not a Team Symbol or Team Neural choice. Both systems have their own advantages and can complement each other pretty well. Therefore, a better solution would be combining these two systems in a way so that we can enjoy all advantages from both sides.
Neural Network Architecture
The next figure illustrates the architecture of the neural network. The proposed architecture is composed of multiple encoders, a metric layer, and a loss layer. First, input data is fed to the encoders which generate vector representations. Note that, product information is encoded by an image encoder and an attribute encoder. In the metric layer, we compute the similarity of a query vector with an image vector and an attribute vector, respectively. Finally, in the loss layer, we compute the difference of similarities between positive and negative pairs, which is used as the feedback to train encoders via backpropagation.
In the last figure, I labeled one possible model for each component, but the choices are quite open. For the sake of clarity, I will keep the model as simple as possible and briefly go through each component.
Query Encoder
Here we need a model that takes in a sequence and outputs a vector. Besides the content of the sequence, the vector representation should also encode language information and be resilient to misspellings. The character-RNN (e.g. LSTM, GRU, SRU) model is a good choice. By feeding the RNN character by character, the model becomes resilient to misspellings such as added/deleted/replaced characters. A misspelled query would result in a vector representation similar to the genuine one’s. Moreover, as European languages (e.g. German and English) share some Unicode characters, one can train queries from different languages in one RNN model. To distinguish words with the same spelling but different meanings in two languages, such as German rot (the color red) and English rot, one can prepend a special character to indicate the language of the sequence, e.g. 🇩🇪 rot and 🇬🇧 rot.
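The input side of such a model can be illustrated with a toy character-level encoder (my own sketch, not the post's code): a language-marker token is prepended, and unknown characters share one id, so there is no out-of-vocabulary input.

```python
def encode_query(query, lang, vocab):
    """Character-level encoding sketch: prepend a language marker so the
    same RNN can disambiguate e.g. German 'rot' from English 'rot'.
    Unknown characters map to a shared <unk> id, so nothing is OOV."""
    tokens = [f"<{lang}>"] + list(query.lower())
    return [vocab.get(t, vocab["<unk>"]) for t in tokens]

# Build a vocab over plain characters plus special marker tokens.
chars = "abcdefghijklmnopqrstuvwxyz "
vocab = {c: i for i, c in enumerate(chars)}
for special in ("<unk>", "<de>", "<en>"):
    vocab[special] = len(vocab)
```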
Using characters instead of words as model input means that your system is unlikely to meet an out-of-vocabulary word. Any input will be encoded into a vector representation. Consequently, the system has a good recall rate, as it will always return some result regardless of the sanity of the input. Of course, the result could be meaningless. However, if a customer is kind and patient enough to click on one relevant product, the system can immediately pick up this signal from the query log as a positive association, retrain the model and provide better results in the next round. In that sense, we close the loop between feedback to users and learning from users.
Note that, a query can be compositional. It may contain multiple words and describe multiple attributes, such as nike sneaker (brand + category) and nike air max (brand + product name). Unfortunately, it is difficult for a plain character RNN to capture the high-order dependency and concept, as its resolution is limited to a single character. To solve this problem, I stack multiple dilated recurrent layers with hierarchical dilations to construct a Dilated Recurrent Neural Networks, which learns temporal dependencies of different scales at different layers. The next figure illustrates a three-layer DilatedRNN with dilation up to 4.
An implementation of dilated RNN using static_rnn API can be found here. The query representation is the last output from dilated RNN, which can be obtained via:
encoder_cell = [LSTMCell(num_hidden) for _ in range(len(dilations))]
q_r = get_last_output_dRNN(input=X_query, cells=encoder_cell, dilations=dilations)

To speed up training, one can also replace Tensorflow’s LSTMCell with the recently proposed Simple Recurrent Unit (SRU). According to the paper, SRU is 5-10x faster than an optimized LSTM implementation. The code can be found here.
If you are interested in extending query encoder further, such as adding a more complicated high-order dependency or integrating side information in each recursion step, please read my blog post on “Why I Use raw_rnn Instead of dynamic_rnn in Tensorflow and So Should You”.
Image Encoder
The image encoder rests on purely visual information. The RGB image data of a product is fed into a multi-layer convolutional neural network based on the ResNet architecture, resulting in an image vector representation in 128-dimensions.
Attribute Encoder
The attributes of a product can be combined into a sparse one-hot encoded vector. It is then supplied to a four-layer, fully connected deep neural network with steadily diminishing layer size. Activation was rendered nonlinear by standard ReLUs, and drop-out is applied to address overfitting. The output yields attribute vector representation in 20 dimensions.
Some readers may question the necessity of having image and attribute encoders at the same time. Isn’t an attribute encoder enough? If you think about search queries in the e-commerce context, especially in the fashion e-commerce I’m working in, queries can be loosely divided into two categories: “attribute” queries such as nike red shoes, of which all words are already present in the product database as attributes, and “visual” queries such as tshirt logo on back or typical berlin that express a more visual or abstract intent from the user, where those words never show up in the product database. The former can be trained with the attribute encoder only, whereas the latter requires the image encoder for effective training. Having both encoders allows some knowledge transfer between them during training, which improves the overall performance.

Metric & Loss Layer
After a query-product pair goes through all three encoders, one can obtain a vector representation $q$ of the query, an image representation $u$ and an attribute representation $v$ of the product. It is now time to squeeze them into a common latent space. In the metric layer, we need a similarity function $m$ which gives a higher value to the positive pair than the negative pair, i.e. $m(q, u^+, v^+) > m(q, u^-, v^-)$. The absolute value of $m(q, u^+, v^+)$ does not bother us too much. We only care about the relative distances between positive and negative pairs. In fact, a larger difference is better for us, as a clearer separation between positive and negative pairs can enhance the generalization ability of the system. As a consequence, we need a loss function $\ell$ which is inversely proportional to the difference between $m(q, u^+, v^+)$ and $m(q, u^-, v^-)$. By splitting $q$ (148-dim) into $q^{\mathrm{img}}$ (128-dim) and $q^{\mathrm{attr}}$ (20-dim), we end up minimizing the following loss function:

$${\begin{aligned}&\sum_{\tiny\begin{array}{c} 0<i<N\\ 0<j<|q_{i}^{+}|\\ 0<k<|q_{i}^{-}|\end{array}}\lambda\,\ell\left(m(q^{\mathrm{img}}_{i}, u_{i,j}^{+}),\, m(q^{\mathrm{img}}_{i}, u_{i,k}^{-})\right)\\ &+ (1-\lambda)\,\ell\left(m(q^{\mathrm{attr}}_{i}, v_{i,j}^{+}),\, m(q^{\mathrm{attr}}_{i}, v_{i,k}^{-})\right),\end{aligned}}$$
where $N$ is the total number of queries. $|q_{i}^{+}|$ and $|q_{i}^{-}|$ are the number of positive and negative products associated with query $i$, respectively. Hyperparameter $\lambda$ trades off between image information and attribute information. For functions $\ell$ and $g$, the options are:
- Loss function $\ell$: logistic, exponential, hinge loss, etc.
- Metric function $m$: cosine similarity, Euclidean distance (i.e. $\ell_2$-norm), MLP, etc.

To understand how this leads to the above loss function, I strongly recommend you read my other blog post on "Optimizing Contrastive/Rank/Triplet Loss in Tensorflow for Neural Information Retrieval". It also explains the metric and loss layer implementation in detail.
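As a toy illustration of one combination from the lists above, here is a minimal numpy sketch using cosine similarity as the metric $m$ and a hinge loss as $\ell$. The margin value and sample vectors are invented for illustration; this is not the exact TensorFlow implementation used in the system:

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hinge_triplet_loss(q, u_pos, u_neg, margin=0.2):
    # penalize when the positive pair is not at least `margin`
    # more similar to the query than the negative pair
    return max(0.0, margin - cosine(q, u_pos) + cosine(q, u_neg))

q = np.array([1.0, 0.0])
u_pos = np.array([0.9, 0.1])   # close to the query: loss should vanish
u_neg = np.array([0.0, 1.0])   # orthogonal to the query
loss = hinge_triplet_loss(q, u_pos, u_neg)
```

The hinge loss is zero once the positive pair beats the negative pair by the margin, which matches the intuition above: we only care about relative separation, not absolute metric values.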
Inference
For a neural IR system, doing inference means serving search requests from users. Since products are updated only periodically (say once a day), we can pre-compute the image and attribute representations of all products and store them. At inference time, we first represent the user input as a vector using the query encoder; then iterate over all available products and compute the metric between the query vector and each of them; finally, sort the results. Depending on the stock size, the metric computation could take a while. Fortunately, this process can be easily parallelized.
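The inference loop above can be sketched as follows. This is a hedged numpy illustration with made-up dimensions (128-dim vectors, a random 1,000-product catalogue) and cosine similarity standing in for the learned metric:

```python
import numpy as np

def normalize(x):
    # l2-normalize rows so that a dot product equals cosine similarity
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# offline: pre-compute and store representations for all products
# (hypothetical 128-dim image vectors for a 1,000-product catalogue)
rng = np.random.default_rng(0)
product_vecs = normalize(rng.random((1000, 128)))

def search(query_vec, product_vecs, top_k=20):
    # online: score the query against every product, then sort best-first
    scores = normalize(query_vec[None, :]) @ product_vecs.T  # shape (1, n_products)
    ranking = np.argsort(-scores[0])
    return ranking[:top_k], scores[0, ranking[:top_k]]

query_vec = rng.random(128)
top_ids, top_scores = search(query_vec, product_vecs)
```

Only the `search` call happens per request; everything above it runs in the daily product-update job.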
Training and Evaluation Scheme
The query-product dataset is partitioned into four sets as illustrated in the next figure.
Data in the orange block is used to train the model, and the evaluation is performed on the Test I set. In this way, the model can't observe any query or product used for training during the test time. For evaluation, we feed the query to the network and return a sorted list of test products. Then we check how the ground-truth products are ranked in the results. Some widely used measurements include: mean average precision (MAP), mean reciprocal rank (MRR), precision@1, precision@1%, negative discounted cumulative gain (NDCG), etc. A comprehensive explanation of these metrics can be found in these slides. With Estimator and the Data API in Tensorflow 1.4, you can easily define the training and evaluation procedure as follows:
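To make two of these measurements concrete, here is a small self-contained sketch of reciprocal rank and precision@k (MRR is simply the mean of the reciprocal ranks over all test queries). The product IDs and relevance sets are invented for illustration:

```python
def reciprocal_rank(ranked_ids, relevant):
    # 1/position of the first relevant product; 0 if none was retrieved
    for pos, pid in enumerate(ranked_ids, start=1):
        if pid in relevant:
            return 1.0 / pos
    return 0.0

def precision_at_k(ranked_ids, relevant, k):
    # fraction of the top-k results that are relevant
    return sum(1 for pid in ranked_ids[:k] if pid in relevant) / k

ranked = ["p3", "p7", "p1", "p9"]   # system output, best first
relevant = {"p7", "p9"}             # ground-truth products for this query
rr = reciprocal_rank(ranked, relevant)     # first hit at rank 2 -> 0.5
p_at_2 = precision_at_k(ranked, relevant, 2)
```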
```python
model = tf.estimator.Estimator(model_fn=neural_ir, params=params)
train_spec = tf.estimator.TrainSpec(input_fn=lambda: input_data.input_fn(ModeKeys.TRAIN))
eval_spec = tf.estimator.EvalSpec(input_fn=lambda: input_data.input_fn(ModeKeys.EVAL))
tf.estimator.train_and_evaluate(model, train_spec, eval_spec)
```

The Test II and Test III sets can also be used for evaluation, to check how the model generalizes to unseen products or unseen queries, respectively.
Qualitative Results
Here I will not present any quantitative results. After all, this is a blog post, not an academic paper, and the goal is mainly to introduce the idea of a neural IR system. So let's look at some results that are easy on the eyes. This actually poses a good question: how can you tell whether an IR system is (not) working by visual inspection alone?
Personally, I call an IR system “working” if it meets these two basic conditions:
- it understands a singleton query described by a basic concept, e.g. brand, color, category;
- it understands a compositional query described by multiple concepts, e.g. brand + color, or brand + color + category + product name.

If a system fails to meet these two conditions, I don't bother checking fancy features such as spell-checking or cross-lingual support. Enough said, here are some search results.
Query & Top-20 Results

🇩🇪 nike
🇩🇪 schwarz (black)
🇩🇪 nike schwarz
🇩🇪 nike schwarz shirts
🇩🇪 nike schwarz shirts langarm (long-sleeved)
🇬🇧 addidsa (misspelled brand)
🇬🇧 addidsa trosers (misspelled brand and category)
🇬🇧 addidsa trosers blue shorrt (misspelled brand and category and property)
🇬🇧 striped shirts woman
🇬🇧 striped shirts man
🇩🇪 kleider (dress)
🇩🇪 🇬🇧 kleider flowers (mix-language)
🇩🇪 🇬🇧 kleid ofshoulder (mix-language & misspelled off-shoulder)
Here I demonstrated some (cherry-picked) results for different types of queries. It seems that the system is going in the right direction. It is exciting to see that the neural IR system is able to correctly interpret named entities, spelling errors and multiple languages without any NLP pipeline or hard-coded rules. However, one can also notice that some top-ranked products are not relevant to the query, which leaves quite some room for improvement.
Speed-wise, inference takes about two seconds per query for 300,000 products on a quad-core CPU. Efficiency can be further improved by using model compression techniques.
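One straightforward way to parallelize the per-query metric computation is to shard the pre-computed product vectors and score the shards concurrently. The sketch below assumes cosine scores over a random catalogue (both assumptions for illustration); numpy releases the GIL inside the matrix product, so even a thread pool gives a speedup here:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def score_chunk(query_vec, chunk):
    # cosine scores for one shard of the catalogue
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk / np.linalg.norm(chunk, axis=1, keepdims=True)
    return c @ q

def parallel_scores(query_vec, product_vecs, n_workers=4):
    # split the catalogue into shards and score them concurrently;
    # concatenating preserves the original product order
    chunks = np.array_split(product_vecs, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(lambda c: score_chunk(query_vec, c), chunks)
    return np.concatenate(list(parts))

rng = np.random.default_rng(0)
products = rng.random((10000, 128))
query = rng.random(128)
scores = parallel_scores(query, products)
```

The same sharding idea extends across machines, where each worker holds one shard of the catalogue and a merger sorts the combined scores.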
Summary
If you are a search developer building a symbolic IR system with Solr/Elasticsearch/Lucene, this post should make you aware of the drawbacks of such a system. It should also answer your What? Why? and How? questions about a neural IR system. Compared to its symbolic counterpart, the new system is more resilient to input noise and requires little domain knowledge about the products and languages. Nonetheless, one should not treat this as a Team Symbol vs. Team Neural kind of choice. Both systems have their own advantages and can complement each other well. A better solution would be to combine the two so that we can enjoy the advantages of both sides.
Some implementation details and tricks are omitted here but can be found in my other posts. I strongly recommend that readers continue with the following posts:
Last but not least, the open-source project MatchZoo contains many state-of-the-art neural IR algorithms. Beyond product search, you may also find it applied to conversational chatbots and question-answering systems.